00:00:00.001 Started by upstream project "autotest-per-patch" build number 132352 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.103 The recommended git tool is: git 00:00:00.103 using credential 00000000-0000-0000-0000-000000000002 00:00:00.105 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.167 Fetching changes from the remote Git repository 00:00:00.169 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.230 Using shallow fetch with depth 1 00:00:00.230 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.230 > git --version # timeout=10 00:00:00.274 > git --version # 'git version 2.39.2' 00:00:00.274 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.299 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.299 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.956 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.968 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.980 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.980 > git config core.sparsecheckout # timeout=10 00:00:07.992 > git read-tree -mu HEAD # timeout=10 00:00:08.006 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.030 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.030 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.152 [Pipeline] Start of Pipeline 00:00:08.167 [Pipeline] library 00:00:08.168 Loading library shm_lib@master 00:00:08.169 Library shm_lib@master is cached. Copying from home. 00:00:08.185 [Pipeline] node 00:00:08.194 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:08.196 [Pipeline] { 00:00:08.208 [Pipeline] catchError 00:00:08.209 [Pipeline] { 00:00:08.221 [Pipeline] wrap 00:00:08.228 [Pipeline] { 00:00:08.235 [Pipeline] stage 00:00:08.236 [Pipeline] { (Prologue) 00:00:08.248 [Pipeline] echo 00:00:08.250 Node: VM-host-SM17 00:00:08.254 [Pipeline] cleanWs 00:00:08.261 [WS-CLEANUP] Deleting project workspace... 00:00:08.261 [WS-CLEANUP] Deferred wipeout is used... 00:00:08.266 [WS-CLEANUP] done 00:00:08.559 [Pipeline] setCustomBuildProperty 00:00:08.642 [Pipeline] httpRequest 00:00:09.030 [Pipeline] echo 00:00:09.032 Sorcerer 10.211.164.20 is alive 00:00:09.041 [Pipeline] retry 00:00:09.043 [Pipeline] { 00:00:09.056 [Pipeline] httpRequest 00:00:09.060 HttpMethod: GET 00:00:09.061 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.062 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.086 Response Code: HTTP/1.1 200 OK 00:00:09.086 Success: Status code 200 is in the accepted range: 200,404 00:00:09.087 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.383 [Pipeline] } 00:00:24.398 [Pipeline] // retry 00:00:24.405 [Pipeline] sh 00:00:24.682 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.692 [Pipeline] httpRequest 00:00:24.990 [Pipeline] echo 00:00:24.992 Sorcerer 10.211.164.20 is alive 00:00:25.001 [Pipeline] retry 00:00:25.004 [Pipeline] { 00:00:25.017 [Pipeline] httpRequest 00:00:25.022 HttpMethod: GET 00:00:25.023 URL: 
http://10.211.164.20/packages/spdk_097b7c969529bf77a5d961c702f9a5819ca2b660.tar.gz 00:00:25.024 Sending request to url: http://10.211.164.20/packages/spdk_097b7c969529bf77a5d961c702f9a5819ca2b660.tar.gz 00:00:25.029 Response Code: HTTP/1.1 200 OK 00:00:25.029 Success: Status code 200 is in the accepted range: 200,404 00:00:25.030 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_097b7c969529bf77a5d961c702f9a5819ca2b660.tar.gz 00:03:58.856 [Pipeline] } 00:03:58.873 [Pipeline] // retry 00:03:58.881 [Pipeline] sh 00:03:59.159 + tar --no-same-owner -xf spdk_097b7c969529bf77a5d961c702f9a5819ca2b660.tar.gz 00:04:02.506 [Pipeline] sh 00:04:02.784 + git -C spdk log --oneline -n5 00:04:02.784 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:04:02.784 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:04:02.784 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:04:02.784 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:04:02.784 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:04:02.801 [Pipeline] writeFile 00:04:02.814 [Pipeline] sh 00:04:03.094 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:03.105 [Pipeline] sh 00:04:03.385 + cat autorun-spdk.conf 00:04:03.385 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:03.385 SPDK_RUN_ASAN=1 00:04:03.385 SPDK_RUN_UBSAN=1 00:04:03.385 SPDK_TEST_RAID=1 00:04:03.385 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:03.392 RUN_NIGHTLY=0 00:04:03.394 [Pipeline] } 00:04:03.408 [Pipeline] // stage 00:04:03.425 [Pipeline] stage 00:04:03.427 [Pipeline] { (Run VM) 00:04:03.441 [Pipeline] sh 00:04:03.721 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:03.721 + echo 'Start stage prepare_nvme.sh' 00:04:03.721 Start stage prepare_nvme.sh 00:04:03.721 + [[ -n 5 ]] 00:04:03.721 + disk_prefix=ex5 00:04:03.721 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:04:03.721 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 
00:04:03.721 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:04:03.721 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:03.721 ++ SPDK_RUN_ASAN=1 00:04:03.721 ++ SPDK_RUN_UBSAN=1 00:04:03.721 ++ SPDK_TEST_RAID=1 00:04:03.721 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:03.721 ++ RUN_NIGHTLY=0 00:04:03.721 + cd /var/jenkins/workspace/raid-vg-autotest 00:04:03.721 + nvme_files=() 00:04:03.721 + declare -A nvme_files 00:04:03.721 + backend_dir=/var/lib/libvirt/images/backends 00:04:03.721 + nvme_files['nvme.img']=5G 00:04:03.721 + nvme_files['nvme-cmb.img']=5G 00:04:03.721 + nvme_files['nvme-multi0.img']=4G 00:04:03.721 + nvme_files['nvme-multi1.img']=4G 00:04:03.721 + nvme_files['nvme-multi2.img']=4G 00:04:03.721 + nvme_files['nvme-openstack.img']=8G 00:04:03.721 + nvme_files['nvme-zns.img']=5G 00:04:03.721 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:03.721 + (( SPDK_TEST_FTL == 1 )) 00:04:03.721 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:03.721 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:04:03.721 + for nvme in "${!nvme_files[@]}" 00:04:03.721 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:04:03.721 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:03.721 + for nvme in "${!nvme_files[@]}" 00:04:03.721 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:04:03.721 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:03.721 + for nvme in "${!nvme_files[@]}" 00:04:03.721 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:04:03.721 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:03.721 + for nvme in "${!nvme_files[@]}" 00:04:03.721 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:04:03.721 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:03.721 + for nvme in "${!nvme_files[@]}" 00:04:03.721 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:04:03.721 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:03.721 + for nvme in "${!nvme_files[@]}" 00:04:03.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:04:03.722 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:03.722 + for nvme in "${!nvme_files[@]}" 00:04:03.722 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:04:03.722 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:03.722 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:04:03.722 + echo 'End stage prepare_nvme.sh' 00:04:03.722 End stage prepare_nvme.sh 00:04:03.734 [Pipeline] sh 00:04:04.015 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:04.015 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:04:04.015 00:04:04.015 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:04:04.015 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:04:04.015 
VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:04:04.015 HELP=0 00:04:04.015 DRY_RUN=0 00:04:04.015 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:04:04.015 NVME_DISKS_TYPE=nvme,nvme, 00:04:04.015 NVME_AUTO_CREATE=0 00:04:04.015 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:04:04.015 NVME_CMB=,, 00:04:04.015 NVME_PMR=,, 00:04:04.015 NVME_ZNS=,, 00:04:04.015 NVME_MS=,, 00:04:04.015 NVME_FDP=,, 00:04:04.015 SPDK_VAGRANT_DISTRO=fedora39 00:04:04.015 SPDK_VAGRANT_VMCPU=10 00:04:04.015 SPDK_VAGRANT_VMRAM=12288 00:04:04.015 SPDK_VAGRANT_PROVIDER=libvirt 00:04:04.015 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:04:04.015 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:04.015 SPDK_OPENSTACK_NETWORK=0 00:04:04.015 VAGRANT_PACKAGE_BOX=0 00:04:04.015 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:04:04.015 FORCE_DISTRO=true 00:04:04.015 VAGRANT_BOX_VERSION= 00:04:04.015 EXTRA_VAGRANTFILES= 00:04:04.015 NIC_MODEL=e1000 00:04:04.015 00:04:04.015 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:04:04.015 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:04:06.547 Bringing machine 'default' up with 'libvirt' provider... 00:04:07.115 ==> default: Creating image (snapshot of base box volume). 00:04:07.115 ==> default: Creating domain with the following settings... 
00:04:07.115 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732086064_4e3663ebd98a00e31079 00:04:07.115 ==> default: -- Domain type: kvm 00:04:07.115 ==> default: -- Cpus: 10 00:04:07.115 ==> default: -- Feature: acpi 00:04:07.115 ==> default: -- Feature: apic 00:04:07.115 ==> default: -- Feature: pae 00:04:07.115 ==> default: -- Memory: 12288M 00:04:07.115 ==> default: -- Memory Backing: hugepages: 00:04:07.115 ==> default: -- Management MAC: 00:04:07.115 ==> default: -- Loader: 00:04:07.115 ==> default: -- Nvram: 00:04:07.115 ==> default: -- Base box: spdk/fedora39 00:04:07.115 ==> default: -- Storage pool: default 00:04:07.115 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732086064_4e3663ebd98a00e31079.img (20G) 00:04:07.115 ==> default: -- Volume Cache: default 00:04:07.115 ==> default: -- Kernel: 00:04:07.115 ==> default: -- Initrd: 00:04:07.115 ==> default: -- Graphics Type: vnc 00:04:07.115 ==> default: -- Graphics Port: -1 00:04:07.115 ==> default: -- Graphics IP: 127.0.0.1 00:04:07.115 ==> default: -- Graphics Password: Not defined 00:04:07.115 ==> default: -- Video Type: cirrus 00:04:07.115 ==> default: -- Video VRAM: 9216 00:04:07.115 ==> default: -- Sound Type: 00:04:07.115 ==> default: -- Keymap: en-us 00:04:07.115 ==> default: -- TPM Path: 00:04:07.115 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:07.115 ==> default: -- Command line args: 00:04:07.115 ==> default: -> value=-device, 00:04:07.115 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:07.115 ==> default: -> value=-drive, 00:04:07.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:04:07.115 ==> default: -> value=-device, 00:04:07.115 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:07.115 ==> default: -> value=-device, 00:04:07.115 ==> default: -> 
value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:07.115 ==> default: -> value=-drive, 00:04:07.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:04:07.115 ==> default: -> value=-device, 00:04:07.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:07.115 ==> default: -> value=-drive, 00:04:07.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:04:07.115 ==> default: -> value=-device, 00:04:07.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:07.115 ==> default: -> value=-drive, 00:04:07.115 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:04:07.115 ==> default: -> value=-device, 00:04:07.115 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:07.115 ==> default: Creating shared folders metadata... 00:04:07.373 ==> default: Starting domain. 00:04:08.749 ==> default: Waiting for domain to get an IP address... 00:04:26.833 ==> default: Waiting for SSH to become available... 00:04:26.833 ==> default: Configuring and enabling network interfaces... 00:04:29.363 default: SSH address: 192.168.121.197:22 00:04:29.363 default: SSH username: vagrant 00:04:29.363 default: SSH auth method: private key 00:04:31.895 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:04:40.077 ==> default: Mounting SSHFS shared folder... 00:04:40.646 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:04:40.646 ==> default: Checking Mount.. 
00:04:42.061 ==> default: Folder Successfully Mounted! 00:04:42.061 ==> default: Running provisioner: file... 00:04:42.998 default: ~/.gitconfig => .gitconfig 00:04:43.257 00:04:43.257 SUCCESS! 00:04:43.257 00:04:43.257 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:04:43.257 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:04:43.257 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:04:43.257 00:04:43.266 [Pipeline] } 00:04:43.282 [Pipeline] // stage 00:04:43.291 [Pipeline] dir 00:04:43.291 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:04:43.293 [Pipeline] { 00:04:43.306 [Pipeline] catchError 00:04:43.308 [Pipeline] { 00:04:43.321 [Pipeline] sh 00:04:43.601 + vagrant ssh-config --host vagrant 00:04:43.601 + sed -ne /^Host/,$p 00:04:43.601 + tee ssh_conf 00:04:46.894 Host vagrant 00:04:46.894 HostName 192.168.121.197 00:04:46.894 User vagrant 00:04:46.894 Port 22 00:04:46.894 UserKnownHostsFile /dev/null 00:04:46.894 StrictHostKeyChecking no 00:04:46.894 PasswordAuthentication no 00:04:46.894 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:04:46.894 IdentitiesOnly yes 00:04:46.894 LogLevel FATAL 00:04:46.894 ForwardAgent yes 00:04:46.894 ForwardX11 yes 00:04:46.894 00:04:46.908 [Pipeline] withEnv 00:04:46.909 [Pipeline] { 00:04:46.923 [Pipeline] sh 00:04:47.204 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:04:47.204 source /etc/os-release 00:04:47.204 [[ -e /image.version ]] && img=$(< /image.version) 00:04:47.204 # Minimal, systemd-like check. 
00:04:47.205 if [[ -e /.dockerenv ]]; then 00:04:47.205 # Clear garbage from the node's name: 00:04:47.205 # agt-er_autotest_547-896 -> autotest_547-896 00:04:47.205 # $HOSTNAME is the actual container id 00:04:47.205 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:04:47.205 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:04:47.205 # We can assume this is a mount from a host where container is running, 00:04:47.205 # so fetch its hostname to easily identify the target swarm worker. 00:04:47.205 container="$(< /etc/hostname) ($agent)" 00:04:47.205 else 00:04:47.205 # Fallback 00:04:47.205 container=$agent 00:04:47.205 fi 00:04:47.205 fi 00:04:47.205 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:04:47.205 00:04:47.476 [Pipeline] } 00:04:47.492 [Pipeline] // withEnv 00:04:47.500 [Pipeline] setCustomBuildProperty 00:04:47.514 [Pipeline] stage 00:04:47.516 [Pipeline] { (Tests) 00:04:47.532 [Pipeline] sh 00:04:47.810 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:04:48.081 [Pipeline] sh 00:04:48.360 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:04:48.374 [Pipeline] timeout 00:04:48.374 Timeout set to expire in 1 hr 30 min 00:04:48.376 [Pipeline] { 00:04:48.389 [Pipeline] sh 00:04:48.668 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:04:49.236 HEAD is now at 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:04:49.247 [Pipeline] sh 00:04:49.527 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:04:49.798 [Pipeline] sh 00:04:50.077 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:04:50.092 [Pipeline] sh 00:04:50.372 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh 
spdk_repo 00:04:50.372 ++ readlink -f spdk_repo 00:04:50.372 + DIR_ROOT=/home/vagrant/spdk_repo 00:04:50.372 + [[ -n /home/vagrant/spdk_repo ]] 00:04:50.372 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:04:50.372 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:04:50.372 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:04:50.372 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:04:50.372 + [[ -d /home/vagrant/spdk_repo/output ]] 00:04:50.372 + [[ raid-vg-autotest == pkgdep-* ]] 00:04:50.372 + cd /home/vagrant/spdk_repo 00:04:50.372 + source /etc/os-release 00:04:50.372 ++ NAME='Fedora Linux' 00:04:50.372 ++ VERSION='39 (Cloud Edition)' 00:04:50.372 ++ ID=fedora 00:04:50.372 ++ VERSION_ID=39 00:04:50.372 ++ VERSION_CODENAME= 00:04:50.372 ++ PLATFORM_ID=platform:f39 00:04:50.372 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:50.372 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:50.372 ++ LOGO=fedora-logo-icon 00:04:50.372 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:50.372 ++ HOME_URL=https://fedoraproject.org/ 00:04:50.372 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:50.372 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:50.372 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:50.372 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:50.372 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:50.372 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:50.372 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:50.372 ++ SUPPORT_END=2024-11-12 00:04:50.372 ++ VARIANT='Cloud Edition' 00:04:50.372 ++ VARIANT_ID=cloud 00:04:50.372 + uname -a 00:04:50.372 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:50.372 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:50.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:50.938 Hugepages 00:04:50.938 node hugesize free / total 00:04:50.938 node0 
1048576kB 0 / 0 00:04:50.938 node0 2048kB 0 / 0 00:04:50.938 00:04:50.938 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.938 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:50.938 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:50.938 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:50.938 + rm -f /tmp/spdk-ld-path 00:04:50.938 + source autorun-spdk.conf 00:04:50.938 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:50.938 ++ SPDK_RUN_ASAN=1 00:04:50.938 ++ SPDK_RUN_UBSAN=1 00:04:50.938 ++ SPDK_TEST_RAID=1 00:04:50.938 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:50.938 ++ RUN_NIGHTLY=0 00:04:50.938 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:50.938 + [[ -n '' ]] 00:04:50.938 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:04:51.197 + for M in /var/spdk/build-*-manifest.txt 00:04:51.197 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:51.197 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:51.197 + for M in /var/spdk/build-*-manifest.txt 00:04:51.197 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:51.197 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:51.197 + for M in /var/spdk/build-*-manifest.txt 00:04:51.197 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:04:51.197 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:04:51.197 ++ uname 00:04:51.197 + [[ Linux == \L\i\n\u\x ]] 00:04:51.197 + sudo dmesg -T 00:04:51.197 + sudo dmesg --clear 00:04:51.197 + dmesg_pid=5211 00:04:51.197 + [[ Fedora Linux == FreeBSD ]] 00:04:51.197 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:51.197 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:51.197 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:51.197 + sudo dmesg -Tw 00:04:51.197 + [[ -x /usr/src/fio-static/fio ]] 00:04:51.197 + export FIO_BIN=/usr/src/fio-static/fio 00:04:51.197 + FIO_BIN=/usr/src/fio-static/fio 
00:04:51.197 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:51.197 + [[ ! -v VFIO_QEMU_BIN ]] 00:04:51.197 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:51.197 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:51.197 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:51.197 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:51.197 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:51.197 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:51.197 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:51.197 07:01:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:51.197 07:01:48 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:51.197 07:01:48 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:51.197 07:01:48 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:04:51.197 07:01:48 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:04:51.197 07:01:48 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:04:51.197 07:01:48 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:51.197 07:01:48 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:04:51.197 07:01:48 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:51.197 07:01:48 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:51.197 07:01:48 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:04:51.197 07:01:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:51.197 07:01:48 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:51.197 07:01:48 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:51.197 07:01:48 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:51.197 07:01:48 -- scripts/common.sh@553 -- $ source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:04:51.197 07:01:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.197 07:01:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.197 07:01:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.197 07:01:48 -- paths/export.sh@5 -- $ export PATH 00:04:51.197 07:01:48 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:51.197 07:01:48 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:04:51.197 07:01:48 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:51.456 07:01:48 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086108.XXXXXX 00:04:51.456 07:01:48 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086108.pQEEwC 00:04:51.456 07:01:48 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:51.456 07:01:48 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:51.456 07:01:48 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:04:51.456 07:01:48 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:04:51.456 07:01:48 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:04:51.456 07:01:48 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:51.456 07:01:48 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:51.456 07:01:48 -- common/autotest_common.sh@10 -- $ set +x 00:04:51.456 07:01:48 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 
00:04:51.456 07:01:48 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:04:51.456 07:01:48 -- pm/common@17 -- $ local monitor
00:04:51.456 07:01:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:51.456 07:01:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:51.456 07:01:48 -- pm/common@25 -- $ sleep 1
00:04:51.456 07:01:48 -- pm/common@21 -- $ date +%s
00:04:51.456 07:01:48 -- pm/common@21 -- $ date +%s
00:04:51.456 07:01:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086108
00:04:51.456 07:01:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732086108
00:04:51.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086108_collect-cpu-load.pm.log
00:04:51.456 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732086108_collect-vmstat.pm.log
00:04:52.391 07:01:49 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:04:52.391 07:01:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:04:52.391 07:01:49 -- spdk/autobuild.sh@12 -- $ umask 022
00:04:52.391 07:01:49 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:04:52.391 07:01:49 -- spdk/autobuild.sh@16 -- $ date -u
00:04:52.391 Wed Nov 20 07:01:49 AM UTC 2024
00:04:52.391 07:01:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:04:52.391 v25.01-pre-203-g097b7c969
00:04:52.391 07:01:49 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:04:52.391 07:01:49 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:04:52.391 07:01:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:52.391 07:01:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:52.391 07:01:49 -- common/autotest_common.sh@10 -- $ set +x
00:04:52.392 ************************************
00:04:52.392 START TEST asan
00:04:52.392 ************************************
00:04:52.392 using asan
00:04:52.392 07:01:49 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:04:52.392
00:04:52.392 real	0m0.000s
00:04:52.392 user	0m0.000s
00:04:52.392 sys	0m0.000s
00:04:52.392 07:01:49 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:52.392 ************************************
00:04:52.392 END TEST asan
00:04:52.392 ************************************
00:04:52.392 07:01:49 asan -- common/autotest_common.sh@10 -- $ set +x
00:04:52.392 07:01:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:04:52.392 07:01:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:04:52.392 07:01:49 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:04:52.392 07:01:49 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:04:52.392 07:01:49 -- common/autotest_common.sh@10 -- $ set +x
00:04:52.392 ************************************
00:04:52.392 START TEST ubsan
00:04:52.392 ************************************
00:04:52.392 using ubsan
00:04:52.392 07:01:49 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:04:52.392
00:04:52.392 real	0m0.000s
00:04:52.392 user	0m0.000s
00:04:52.392 sys	0m0.000s
00:04:52.392 07:01:49 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:52.392 07:01:49 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:04:52.392 ************************************
00:04:52.392 END TEST ubsan
00:04:52.392 ************************************
00:04:52.392 07:01:49 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:04:52.392 07:01:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:04:52.392 07:01:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:04:52.392 07:01:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:04:52.392 07:01:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:04:52.392 07:01:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:04:52.392 07:01:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:04:52.392 07:01:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:04:52.392 07:01:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:04:52.650 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:04:52.650 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:04:52.908 Using 'verbs' RDMA provider
00:05:06.082 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:20.990 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:20.990 Creating mk/config.mk...done.
00:05:20.990 Creating mk/cc.flags.mk...done.
00:05:20.990 Type 'make' to build.
00:05:20.990 07:02:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:20.990 07:02:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:20.990 07:02:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:20.990 07:02:17 -- common/autotest_common.sh@10 -- $ set +x
00:05:20.990 ************************************
00:05:20.990 START TEST make
00:05:20.990 ************************************
00:05:20.990 07:02:17 make -- common/autotest_common.sh@1129 -- $ make -j10
00:05:20.990 make[1]: Nothing to be done for 'all'.
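The xtrace above shows autobuild.sh invoking `configure` with ASan/UBSan and a fixed feature set before running `make -j10`. A minimal sketch that reconstructs that command line (it only prints the command rather than executing it, since SPDK is not assumed to be present; `SPDK_DIR` is an assumed default, the flags are copied from the trace):

```shell
#!/usr/bin/env bash
# Sketch only: rebuild the configure invocation seen in the trace above.
# SPDK_DIR is an assumption for illustration; override it via the environment.
SPDK_DIR="${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}"
flags=(
  --enable-debug --enable-werror
  --with-rdma --with-idxd --with-fio=/usr/src/fio
  --with-iscsi-initiator --disable-unit-tests
  --enable-ubsan --enable-asan --enable-coverage
  --with-ublk --with-raid5f --with-shared
)
# Print rather than run, so the sketch is inspectable on any machine.
echo "$SPDK_DIR/configure ${flags[*]}"
```

Keeping the flag list in an array makes it easy to diff one autotest job's feature set against another's before actually rerunning the build.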
00:05:33.188 The Meson build system 00:05:33.188 Version: 1.5.0 00:05:33.188 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:33.188 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:33.188 Build type: native build 00:05:33.188 Program cat found: YES (/usr/bin/cat) 00:05:33.188 Project name: DPDK 00:05:33.188 Project version: 24.03.0 00:05:33.188 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:33.188 C linker for the host machine: cc ld.bfd 2.40-14 00:05:33.188 Host machine cpu family: x86_64 00:05:33.188 Host machine cpu: x86_64 00:05:33.188 Message: ## Building in Developer Mode ## 00:05:33.188 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:33.188 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:33.188 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:33.188 Program python3 found: YES (/usr/bin/python3) 00:05:33.188 Program cat found: YES (/usr/bin/cat) 00:05:33.188 Compiler for C supports arguments -march=native: YES 00:05:33.188 Checking for size of "void *" : 8 00:05:33.188 Checking for size of "void *" : 8 (cached) 00:05:33.188 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:33.188 Library m found: YES 00:05:33.188 Library numa found: YES 00:05:33.188 Has header "numaif.h" : YES 00:05:33.188 Library fdt found: NO 00:05:33.188 Library execinfo found: NO 00:05:33.188 Has header "execinfo.h" : YES 00:05:33.188 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:33.188 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:33.188 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:33.188 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:33.188 Run-time dependency openssl found: YES 3.1.1 00:05:33.188 Run-time dependency libpcap found: YES 1.10.4 00:05:33.188 Has header "pcap.h" with dependency 
libpcap: YES 00:05:33.188 Compiler for C supports arguments -Wcast-qual: YES 00:05:33.188 Compiler for C supports arguments -Wdeprecated: YES 00:05:33.188 Compiler for C supports arguments -Wformat: YES 00:05:33.188 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:33.188 Compiler for C supports arguments -Wformat-security: NO 00:05:33.188 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:33.188 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:33.188 Compiler for C supports arguments -Wnested-externs: YES 00:05:33.188 Compiler for C supports arguments -Wold-style-definition: YES 00:05:33.188 Compiler for C supports arguments -Wpointer-arith: YES 00:05:33.188 Compiler for C supports arguments -Wsign-compare: YES 00:05:33.188 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:33.188 Compiler for C supports arguments -Wundef: YES 00:05:33.188 Compiler for C supports arguments -Wwrite-strings: YES 00:05:33.188 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:33.188 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:33.188 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:33.188 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:33.188 Program objdump found: YES (/usr/bin/objdump) 00:05:33.188 Compiler for C supports arguments -mavx512f: YES 00:05:33.188 Checking if "AVX512 checking" compiles: YES 00:05:33.188 Fetching value of define "__SSE4_2__" : 1 00:05:33.188 Fetching value of define "__AES__" : 1 00:05:33.188 Fetching value of define "__AVX__" : 1 00:05:33.188 Fetching value of define "__AVX2__" : 1 00:05:33.188 Fetching value of define "__AVX512BW__" : (undefined) 00:05:33.188 Fetching value of define "__AVX512CD__" : (undefined) 00:05:33.188 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:33.188 Fetching value of define "__AVX512F__" : (undefined) 00:05:33.188 Fetching value of define "__AVX512VL__" : 
(undefined) 00:05:33.188 Fetching value of define "__PCLMUL__" : 1 00:05:33.188 Fetching value of define "__RDRND__" : 1 00:05:33.188 Fetching value of define "__RDSEED__" : 1 00:05:33.188 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:33.188 Fetching value of define "__znver1__" : (undefined) 00:05:33.188 Fetching value of define "__znver2__" : (undefined) 00:05:33.188 Fetching value of define "__znver3__" : (undefined) 00:05:33.188 Fetching value of define "__znver4__" : (undefined) 00:05:33.188 Library asan found: YES 00:05:33.188 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:33.188 Message: lib/log: Defining dependency "log" 00:05:33.188 Message: lib/kvargs: Defining dependency "kvargs" 00:05:33.188 Message: lib/telemetry: Defining dependency "telemetry" 00:05:33.188 Library rt found: YES 00:05:33.189 Checking for function "getentropy" : NO 00:05:33.189 Message: lib/eal: Defining dependency "eal" 00:05:33.189 Message: lib/ring: Defining dependency "ring" 00:05:33.189 Message: lib/rcu: Defining dependency "rcu" 00:05:33.189 Message: lib/mempool: Defining dependency "mempool" 00:05:33.189 Message: lib/mbuf: Defining dependency "mbuf" 00:05:33.189 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:33.189 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:33.189 Compiler for C supports arguments -mpclmul: YES 00:05:33.189 Compiler for C supports arguments -maes: YES 00:05:33.189 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:33.189 Compiler for C supports arguments -mavx512bw: YES 00:05:33.189 Compiler for C supports arguments -mavx512dq: YES 00:05:33.189 Compiler for C supports arguments -mavx512vl: YES 00:05:33.189 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:33.189 Compiler for C supports arguments -mavx2: YES 00:05:33.189 Compiler for C supports arguments -mavx: YES 00:05:33.189 Message: lib/net: Defining dependency "net" 00:05:33.189 Message: lib/meter: Defining 
dependency "meter" 00:05:33.189 Message: lib/ethdev: Defining dependency "ethdev" 00:05:33.189 Message: lib/pci: Defining dependency "pci" 00:05:33.189 Message: lib/cmdline: Defining dependency "cmdline" 00:05:33.189 Message: lib/hash: Defining dependency "hash" 00:05:33.189 Message: lib/timer: Defining dependency "timer" 00:05:33.189 Message: lib/compressdev: Defining dependency "compressdev" 00:05:33.189 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:33.189 Message: lib/dmadev: Defining dependency "dmadev" 00:05:33.189 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:33.189 Message: lib/power: Defining dependency "power" 00:05:33.189 Message: lib/reorder: Defining dependency "reorder" 00:05:33.189 Message: lib/security: Defining dependency "security" 00:05:33.189 Has header "linux/userfaultfd.h" : YES 00:05:33.189 Has header "linux/vduse.h" : YES 00:05:33.189 Message: lib/vhost: Defining dependency "vhost" 00:05:33.189 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:33.189 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:33.189 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:33.189 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:33.189 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:33.189 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:33.189 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:33.189 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:33.189 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:33.189 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:33.189 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:33.189 Configuring doxy-api-html.conf using configuration 00:05:33.189 Configuring doxy-api-man.conf using configuration 00:05:33.189 Program mandb found: YES 
(/usr/bin/mandb) 00:05:33.189 Program sphinx-build found: NO 00:05:33.189 Configuring rte_build_config.h using configuration 00:05:33.189 Message: 00:05:33.189 ================= 00:05:33.189 Applications Enabled 00:05:33.189 ================= 00:05:33.189 00:05:33.189 apps: 00:05:33.189 00:05:33.189 00:05:33.189 Message: 00:05:33.189 ================= 00:05:33.189 Libraries Enabled 00:05:33.189 ================= 00:05:33.189 00:05:33.189 libs: 00:05:33.189 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:33.189 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:33.189 cryptodev, dmadev, power, reorder, security, vhost, 00:05:33.189 00:05:33.189 Message: 00:05:33.189 =============== 00:05:33.189 Drivers Enabled 00:05:33.189 =============== 00:05:33.189 00:05:33.189 common: 00:05:33.189 00:05:33.189 bus: 00:05:33.189 pci, vdev, 00:05:33.189 mempool: 00:05:33.189 ring, 00:05:33.189 dma: 00:05:33.189 00:05:33.189 net: 00:05:33.189 00:05:33.189 crypto: 00:05:33.189 00:05:33.189 compress: 00:05:33.189 00:05:33.189 vdpa: 00:05:33.189 00:05:33.189 00:05:33.189 Message: 00:05:33.189 ================= 00:05:33.189 Content Skipped 00:05:33.189 ================= 00:05:33.189 00:05:33.189 apps: 00:05:33.189 dumpcap: explicitly disabled via build config 00:05:33.189 graph: explicitly disabled via build config 00:05:33.189 pdump: explicitly disabled via build config 00:05:33.189 proc-info: explicitly disabled via build config 00:05:33.189 test-acl: explicitly disabled via build config 00:05:33.189 test-bbdev: explicitly disabled via build config 00:05:33.189 test-cmdline: explicitly disabled via build config 00:05:33.189 test-compress-perf: explicitly disabled via build config 00:05:33.189 test-crypto-perf: explicitly disabled via build config 00:05:33.189 test-dma-perf: explicitly disabled via build config 00:05:33.189 test-eventdev: explicitly disabled via build config 00:05:33.189 test-fib: explicitly disabled via build config 00:05:33.189 
test-flow-perf: explicitly disabled via build config 00:05:33.189 test-gpudev: explicitly disabled via build config 00:05:33.189 test-mldev: explicitly disabled via build config 00:05:33.189 test-pipeline: explicitly disabled via build config 00:05:33.189 test-pmd: explicitly disabled via build config 00:05:33.189 test-regex: explicitly disabled via build config 00:05:33.189 test-sad: explicitly disabled via build config 00:05:33.189 test-security-perf: explicitly disabled via build config 00:05:33.189 00:05:33.189 libs: 00:05:33.189 argparse: explicitly disabled via build config 00:05:33.189 metrics: explicitly disabled via build config 00:05:33.189 acl: explicitly disabled via build config 00:05:33.189 bbdev: explicitly disabled via build config 00:05:33.189 bitratestats: explicitly disabled via build config 00:05:33.189 bpf: explicitly disabled via build config 00:05:33.189 cfgfile: explicitly disabled via build config 00:05:33.189 distributor: explicitly disabled via build config 00:05:33.189 efd: explicitly disabled via build config 00:05:33.189 eventdev: explicitly disabled via build config 00:05:33.189 dispatcher: explicitly disabled via build config 00:05:33.189 gpudev: explicitly disabled via build config 00:05:33.189 gro: explicitly disabled via build config 00:05:33.189 gso: explicitly disabled via build config 00:05:33.189 ip_frag: explicitly disabled via build config 00:05:33.189 jobstats: explicitly disabled via build config 00:05:33.189 latencystats: explicitly disabled via build config 00:05:33.189 lpm: explicitly disabled via build config 00:05:33.189 member: explicitly disabled via build config 00:05:33.189 pcapng: explicitly disabled via build config 00:05:33.189 rawdev: explicitly disabled via build config 00:05:33.189 regexdev: explicitly disabled via build config 00:05:33.189 mldev: explicitly disabled via build config 00:05:33.189 rib: explicitly disabled via build config 00:05:33.189 sched: explicitly disabled via build config 00:05:33.189 
stack: explicitly disabled via build config 00:05:33.189 ipsec: explicitly disabled via build config 00:05:33.189 pdcp: explicitly disabled via build config 00:05:33.189 fib: explicitly disabled via build config 00:05:33.189 port: explicitly disabled via build config 00:05:33.189 pdump: explicitly disabled via build config 00:05:33.189 table: explicitly disabled via build config 00:05:33.189 pipeline: explicitly disabled via build config 00:05:33.189 graph: explicitly disabled via build config 00:05:33.189 node: explicitly disabled via build config 00:05:33.189 00:05:33.189 drivers: 00:05:33.189 common/cpt: not in enabled drivers build config 00:05:33.189 common/dpaax: not in enabled drivers build config 00:05:33.189 common/iavf: not in enabled drivers build config 00:05:33.189 common/idpf: not in enabled drivers build config 00:05:33.189 common/ionic: not in enabled drivers build config 00:05:33.189 common/mvep: not in enabled drivers build config 00:05:33.189 common/octeontx: not in enabled drivers build config 00:05:33.189 bus/auxiliary: not in enabled drivers build config 00:05:33.189 bus/cdx: not in enabled drivers build config 00:05:33.189 bus/dpaa: not in enabled drivers build config 00:05:33.189 bus/fslmc: not in enabled drivers build config 00:05:33.189 bus/ifpga: not in enabled drivers build config 00:05:33.189 bus/platform: not in enabled drivers build config 00:05:33.189 bus/uacce: not in enabled drivers build config 00:05:33.189 bus/vmbus: not in enabled drivers build config 00:05:33.189 common/cnxk: not in enabled drivers build config 00:05:33.189 common/mlx5: not in enabled drivers build config 00:05:33.189 common/nfp: not in enabled drivers build config 00:05:33.189 common/nitrox: not in enabled drivers build config 00:05:33.189 common/qat: not in enabled drivers build config 00:05:33.189 common/sfc_efx: not in enabled drivers build config 00:05:33.189 mempool/bucket: not in enabled drivers build config 00:05:33.189 mempool/cnxk: not in enabled 
drivers build config 00:05:33.189 mempool/dpaa: not in enabled drivers build config 00:05:33.189 mempool/dpaa2: not in enabled drivers build config 00:05:33.189 mempool/octeontx: not in enabled drivers build config 00:05:33.189 mempool/stack: not in enabled drivers build config 00:05:33.189 dma/cnxk: not in enabled drivers build config 00:05:33.189 dma/dpaa: not in enabled drivers build config 00:05:33.189 dma/dpaa2: not in enabled drivers build config 00:05:33.189 dma/hisilicon: not in enabled drivers build config 00:05:33.189 dma/idxd: not in enabled drivers build config 00:05:33.189 dma/ioat: not in enabled drivers build config 00:05:33.189 dma/skeleton: not in enabled drivers build config 00:05:33.189 net/af_packet: not in enabled drivers build config 00:05:33.189 net/af_xdp: not in enabled drivers build config 00:05:33.189 net/ark: not in enabled drivers build config 00:05:33.189 net/atlantic: not in enabled drivers build config 00:05:33.189 net/avp: not in enabled drivers build config 00:05:33.189 net/axgbe: not in enabled drivers build config 00:05:33.189 net/bnx2x: not in enabled drivers build config 00:05:33.189 net/bnxt: not in enabled drivers build config 00:05:33.189 net/bonding: not in enabled drivers build config 00:05:33.189 net/cnxk: not in enabled drivers build config 00:05:33.189 net/cpfl: not in enabled drivers build config 00:05:33.190 net/cxgbe: not in enabled drivers build config 00:05:33.190 net/dpaa: not in enabled drivers build config 00:05:33.190 net/dpaa2: not in enabled drivers build config 00:05:33.190 net/e1000: not in enabled drivers build config 00:05:33.190 net/ena: not in enabled drivers build config 00:05:33.190 net/enetc: not in enabled drivers build config 00:05:33.190 net/enetfec: not in enabled drivers build config 00:05:33.190 net/enic: not in enabled drivers build config 00:05:33.190 net/failsafe: not in enabled drivers build config 00:05:33.190 net/fm10k: not in enabled drivers build config 00:05:33.190 net/gve: not in 
enabled drivers build config 00:05:33.190 net/hinic: not in enabled drivers build config 00:05:33.190 net/hns3: not in enabled drivers build config 00:05:33.190 net/i40e: not in enabled drivers build config 00:05:33.190 net/iavf: not in enabled drivers build config 00:05:33.190 net/ice: not in enabled drivers build config 00:05:33.190 net/idpf: not in enabled drivers build config 00:05:33.190 net/igc: not in enabled drivers build config 00:05:33.190 net/ionic: not in enabled drivers build config 00:05:33.190 net/ipn3ke: not in enabled drivers build config 00:05:33.190 net/ixgbe: not in enabled drivers build config 00:05:33.190 net/mana: not in enabled drivers build config 00:05:33.190 net/memif: not in enabled drivers build config 00:05:33.190 net/mlx4: not in enabled drivers build config 00:05:33.190 net/mlx5: not in enabled drivers build config 00:05:33.190 net/mvneta: not in enabled drivers build config 00:05:33.190 net/mvpp2: not in enabled drivers build config 00:05:33.190 net/netvsc: not in enabled drivers build config 00:05:33.190 net/nfb: not in enabled drivers build config 00:05:33.190 net/nfp: not in enabled drivers build config 00:05:33.190 net/ngbe: not in enabled drivers build config 00:05:33.190 net/null: not in enabled drivers build config 00:05:33.190 net/octeontx: not in enabled drivers build config 00:05:33.190 net/octeon_ep: not in enabled drivers build config 00:05:33.190 net/pcap: not in enabled drivers build config 00:05:33.190 net/pfe: not in enabled drivers build config 00:05:33.190 net/qede: not in enabled drivers build config 00:05:33.190 net/ring: not in enabled drivers build config 00:05:33.190 net/sfc: not in enabled drivers build config 00:05:33.190 net/softnic: not in enabled drivers build config 00:05:33.190 net/tap: not in enabled drivers build config 00:05:33.190 net/thunderx: not in enabled drivers build config 00:05:33.190 net/txgbe: not in enabled drivers build config 00:05:33.190 net/vdev_netvsc: not in enabled drivers build 
config 00:05:33.190 net/vhost: not in enabled drivers build config 00:05:33.190 net/virtio: not in enabled drivers build config 00:05:33.190 net/vmxnet3: not in enabled drivers build config 00:05:33.190 raw/*: missing internal dependency, "rawdev" 00:05:33.190 crypto/armv8: not in enabled drivers build config 00:05:33.190 crypto/bcmfs: not in enabled drivers build config 00:05:33.190 crypto/caam_jr: not in enabled drivers build config 00:05:33.190 crypto/ccp: not in enabled drivers build config 00:05:33.190 crypto/cnxk: not in enabled drivers build config 00:05:33.190 crypto/dpaa_sec: not in enabled drivers build config 00:05:33.190 crypto/dpaa2_sec: not in enabled drivers build config 00:05:33.190 crypto/ipsec_mb: not in enabled drivers build config 00:05:33.190 crypto/mlx5: not in enabled drivers build config 00:05:33.190 crypto/mvsam: not in enabled drivers build config 00:05:33.190 crypto/nitrox: not in enabled drivers build config 00:05:33.190 crypto/null: not in enabled drivers build config 00:05:33.190 crypto/octeontx: not in enabled drivers build config 00:05:33.190 crypto/openssl: not in enabled drivers build config 00:05:33.190 crypto/scheduler: not in enabled drivers build config 00:05:33.190 crypto/uadk: not in enabled drivers build config 00:05:33.190 crypto/virtio: not in enabled drivers build config 00:05:33.190 compress/isal: not in enabled drivers build config 00:05:33.190 compress/mlx5: not in enabled drivers build config 00:05:33.190 compress/nitrox: not in enabled drivers build config 00:05:33.190 compress/octeontx: not in enabled drivers build config 00:05:33.190 compress/zlib: not in enabled drivers build config 00:05:33.190 regex/*: missing internal dependency, "regexdev" 00:05:33.190 ml/*: missing internal dependency, "mldev" 00:05:33.190 vdpa/ifc: not in enabled drivers build config 00:05:33.190 vdpa/mlx5: not in enabled drivers build config 00:05:33.190 vdpa/nfp: not in enabled drivers build config 00:05:33.190 vdpa/sfc: not in enabled 
drivers build config
00:05:33.190 event/*: missing internal dependency, "eventdev"
00:05:33.190 baseband/*: missing internal dependency, "bbdev"
00:05:33.190 gpu/*: missing internal dependency, "gpudev"
00:05:33.190
00:05:33.190
00:05:33.190 Build targets in project: 85
00:05:33.190
00:05:33.190 DPDK 24.03.0
00:05:33.190
00:05:33.190 User defined options
00:05:33.190   buildtype : debug
00:05:33.190   default_library : shared
00:05:33.190   libdir : lib
00:05:33.190   prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:33.190   b_sanitize : address
00:05:33.190   c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:33.190   c_link_args :
00:05:33.190   cpu_instruction_set: native
00:05:33.190   disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:33.190   disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:33.190   enable_docs : false
00:05:33.190   enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:33.190   enable_kmods : false
00:05:33.190   max_lcores : 128
00:05:33.190   tests : false
00:05:33.190
00:05:33.190 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:33.190 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:33.190 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:33.190 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:33.190 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:33.190 [4/268] Linking static target
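The "User defined options" block above is what SPDK's configure step handed to Meson for the bundled DPDK. A hedged sketch of an equivalent `meson setup` line built from those options (printed only, not executed; the `build-tmp` directory name and the option subset are assumptions for illustration):

```shell
#!/usr/bin/env bash
# Sketch only: assemble a meson setup command matching the options listed
# in the log. The directory names are assumed, and only a subset of the
# options is shown; this is not the exact command SPDK's configure runs.
opts=(
  --buildtype=debug
  --default-library=shared
  --libdir=lib
  --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build
  -Db_sanitize=address
  -Dmax_lcores=128
  -Dtests=false
  -Denable_docs=false
)
echo "meson setup build-tmp ${opts[*]}"
```

Printing the assembled command first is a common pattern for debugging CI configuration drift: the string can be compared directly against the "User defined options" summary Meson echoes back.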
lib/librte_kvargs.a 00:05:33.190 [5/268] Linking static target lib/librte_log.a 00:05:33.190 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:33.190 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.190 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:33.190 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:33.190 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:33.190 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:33.190 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:33.190 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:33.190 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:33.190 [15/268] Linking static target lib/librte_telemetry.a 00:05:33.190 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:33.190 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:33.450 [18/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:33.450 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:33.450 [20/268] Linking target lib/librte_log.so.24.1 00:05:33.709 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:33.709 [22/268] Linking target lib/librte_kvargs.so.24.1 00:05:33.967 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:33.967 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:34.225 [25/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:34.225 [26/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:34.225 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:34.225 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:34.225 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:34.225 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:34.225 [31/268] Linking target lib/librte_telemetry.so.24.1 00:05:34.225 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:34.225 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:34.483 [34/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:34.483 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:34.741 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:34.741 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:35.000 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:35.000 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:35.000 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:35.000 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:35.000 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:35.000 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:35.259 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:35.259 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:35.259 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:35.518 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:05:35.518 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:35.518 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:35.776 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:35.776 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:35.776 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:36.035 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:36.035 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:36.293 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:36.293 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:36.294 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:36.552 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:36.552 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:36.552 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:36.552 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:36.552 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:36.811 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:36.811 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:37.069 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:37.069 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:37.326 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:37.585 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:37.585 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:37.585 [70/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:37.585 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:37.585 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:37.585 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:37.859 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:37.859 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:37.859 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:37.859 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:38.132 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:38.132 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:38.391 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:38.391 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:38.391 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:38.391 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:38.391 [84/268] Linking static target lib/librte_ring.a 00:05:38.391 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:38.650 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:38.650 [87/268] Linking static target lib/librte_eal.a 00:05:38.908 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:38.908 [89/268] Linking static target lib/librte_rcu.a 00:05:38.908 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:38.908 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:38.908 [92/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.166 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:39.166 [94/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:39.166 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:39.166 [96/268] Linking static target lib/librte_mempool.a 00:05:39.425 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:39.425 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.425 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:39.425 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:39.683 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:39.683 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:39.941 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:39.941 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:40.200 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:40.200 [106/268] Linking static target lib/librte_mbuf.a 00:05:40.200 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:40.200 [108/268] Linking static target lib/librte_meter.a 00:05:40.200 [109/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:40.200 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:40.200 [111/268] Linking static target lib/librte_net.a 00:05:40.459 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.459 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:40.459 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:40.459 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:40.718 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.718 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by 
meson to capture output) 00:05:40.977 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:41.235 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:41.494 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:41.494 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:42.060 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:42.060 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:42.060 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:42.060 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:42.060 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:42.060 [127/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:42.060 [128/268] Linking static target lib/librte_pci.a 00:05:42.060 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:42.318 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:42.318 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:42.318 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:42.577 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:42.577 [134/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:42.577 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:42.577 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:42.577 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:42.577 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:42.577 [139/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:42.836 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:42.836 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:42.836 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:42.836 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:42.836 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:42.836 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:42.836 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:42.836 [147/268] Linking static target lib/librte_cmdline.a 00:05:43.094 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:43.353 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:43.353 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:43.353 [151/268] Linking static target lib/librte_ethdev.a 00:05:43.610 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:43.610 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:43.610 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:43.868 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:43.868 [156/268] Linking static target lib/librte_timer.a 00:05:43.868 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:44.127 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:44.386 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:44.386 [160/268] Linking static target lib/librte_hash.a 00:05:44.386 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:44.386 [162/268] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:05:44.386 [163/268] Linking static target lib/librte_compressdev.a 00:05:44.386 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:44.670 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:44.670 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:44.670 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:44.670 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:44.670 [169/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:44.670 [170/268] Linking static target lib/librte_dmadev.a 00:05:44.670 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:45.238 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:45.238 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:45.497 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:45.497 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.497 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:45.497 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.757 [178/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.757 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:45.757 [180/268] Linking static target lib/librte_cryptodev.a 00:05:45.757 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:45.757 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:46.014 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:46.014 
[184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:46.273 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:46.273 [186/268] Linking static target lib/librte_power.a 00:05:46.273 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:46.531 [188/268] Linking static target lib/librte_reorder.a 00:05:46.531 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:46.531 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:46.531 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:47.098 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.098 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:47.357 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:47.357 [195/268] Linking static target lib/librte_security.a 00:05:47.615 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.615 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:47.872 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:47.872 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:48.130 [200/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.130 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:48.388 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.388 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:48.388 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:48.388 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:48.647 [206/268] Compiling C 
object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:48.906 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:48.906 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:49.164 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:49.164 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:49.164 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:49.423 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:49.423 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:49.423 [214/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:49.423 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:49.423 [216/268] Linking static target drivers/librte_bus_pci.a 00:05:49.423 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:49.423 [218/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:49.423 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:49.423 [220/268] Linking static target drivers/librte_bus_vdev.a 00:05:49.423 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:49.682 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:49.682 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:49.682 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:49.682 [225/268] Linking static target drivers/librte_mempool_ring.a 00:05:49.682 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.942 [227/268] 
Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.510 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:50.769 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.769 [230/268] Linking target lib/librte_eal.so.24.1 00:05:51.027 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:51.027 [232/268] Linking target lib/librte_ring.so.24.1 00:05:51.027 [233/268] Linking target lib/librte_pci.so.24.1 00:05:51.027 [234/268] Linking target lib/librte_meter.so.24.1 00:05:51.027 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:51.027 [236/268] Linking target lib/librte_timer.so.24.1 00:05:51.027 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:51.027 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:51.027 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:51.286 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:51.286 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:51.286 [242/268] Linking target lib/librte_rcu.so.24.1 00:05:51.286 [243/268] Linking target lib/librte_mempool.so.24.1 00:05:51.286 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:51.286 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:51.286 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:51.286 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:51.286 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:51.286 [249/268] Linking target lib/librte_mbuf.so.24.1 00:05:51.545 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:51.545 [251/268] 
Linking target lib/librte_reorder.so.24.1 00:05:51.545 [252/268] Linking target lib/librte_net.so.24.1 00:05:51.545 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:51.545 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:05:51.804 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:51.804 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:51.804 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.804 [258/268] Linking target lib/librte_hash.so.24.1 00:05:51.804 [259/268] Linking target lib/librte_cmdline.so.24.1 00:05:51.804 [260/268] Linking target lib/librte_security.so.24.1 00:05:51.804 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:52.064 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:52.064 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:52.064 [264/268] Linking target lib/librte_power.so.24.1 00:05:53.978 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:53.978 [266/268] Linking static target lib/librte_vhost.a 00:05:55.876 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:55.877 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:55.877 INFO: autodetecting backend as ninja 00:05:55.877 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:17.815 CC lib/ut_mock/mock.o 00:06:17.815 CC lib/ut/ut.o 00:06:17.815 CC lib/log/log_flags.o 00:06:17.815 CC lib/log/log.o 00:06:17.815 CC lib/log/log_deprecated.o 00:06:17.815 LIB libspdk_ut_mock.a 00:06:17.815 LIB libspdk_ut.a 00:06:17.815 LIB libspdk_log.a 00:06:17.815 SO libspdk_ut_mock.so.6.0 00:06:17.815 SO libspdk_ut.so.2.0 00:06:17.815 SO libspdk_log.so.7.1 00:06:17.815 SYMLINK libspdk_ut_mock.so 
00:06:17.815 SYMLINK libspdk_ut.so 00:06:17.815 SYMLINK libspdk_log.so 00:06:17.815 CC lib/dma/dma.o 00:06:17.815 CC lib/util/base64.o 00:06:17.815 CXX lib/trace_parser/trace.o 00:06:17.815 CC lib/util/cpuset.o 00:06:17.815 CC lib/util/bit_array.o 00:06:17.815 CC lib/util/crc16.o 00:06:17.815 CC lib/util/crc32.o 00:06:17.815 CC lib/util/crc32c.o 00:06:17.815 CC lib/ioat/ioat.o 00:06:17.815 CC lib/vfio_user/host/vfio_user_pci.o 00:06:17.815 CC lib/vfio_user/host/vfio_user.o 00:06:17.815 CC lib/util/crc32_ieee.o 00:06:17.815 CC lib/util/crc64.o 00:06:17.815 LIB libspdk_dma.a 00:06:17.815 CC lib/util/dif.o 00:06:17.815 SO libspdk_dma.so.5.0 00:06:17.815 CC lib/util/fd.o 00:06:17.815 CC lib/util/fd_group.o 00:06:17.815 SYMLINK libspdk_dma.so 00:06:17.815 CC lib/util/file.o 00:06:17.815 CC lib/util/hexlify.o 00:06:17.815 LIB libspdk_ioat.a 00:06:17.815 SO libspdk_ioat.so.7.0 00:06:17.815 CC lib/util/iov.o 00:06:17.815 SYMLINK libspdk_ioat.so 00:06:17.815 CC lib/util/math.o 00:06:17.815 CC lib/util/net.o 00:06:17.815 CC lib/util/pipe.o 00:06:17.815 LIB libspdk_vfio_user.a 00:06:17.815 CC lib/util/strerror_tls.o 00:06:17.815 CC lib/util/string.o 00:06:17.815 SO libspdk_vfio_user.so.5.0 00:06:17.815 CC lib/util/uuid.o 00:06:17.815 CC lib/util/xor.o 00:06:17.815 SYMLINK libspdk_vfio_user.so 00:06:17.815 CC lib/util/zipf.o 00:06:17.815 CC lib/util/md5.o 00:06:17.815 LIB libspdk_util.a 00:06:17.815 SO libspdk_util.so.10.1 00:06:17.815 LIB libspdk_trace_parser.a 00:06:17.815 SO libspdk_trace_parser.so.6.0 00:06:17.815 SYMLINK libspdk_util.so 00:06:17.815 SYMLINK libspdk_trace_parser.so 00:06:17.815 CC lib/vmd/vmd.o 00:06:17.815 CC lib/env_dpdk/env.o 00:06:17.815 CC lib/env_dpdk/memory.o 00:06:17.815 CC lib/env_dpdk/pci.o 00:06:17.815 CC lib/conf/conf.o 00:06:17.815 CC lib/idxd/idxd.o 00:06:17.815 CC lib/env_dpdk/init.o 00:06:17.815 CC lib/vmd/led.o 00:06:17.815 CC lib/rdma_utils/rdma_utils.o 00:06:17.815 CC lib/json/json_parse.o 00:06:17.815 CC lib/idxd/idxd_user.o 
00:06:17.815 LIB libspdk_conf.a 00:06:17.815 SO libspdk_conf.so.6.0 00:06:17.815 CC lib/json/json_util.o 00:06:17.815 LIB libspdk_rdma_utils.a 00:06:17.816 SYMLINK libspdk_conf.so 00:06:17.816 CC lib/env_dpdk/threads.o 00:06:17.816 SO libspdk_rdma_utils.so.1.0 00:06:17.816 CC lib/env_dpdk/pci_ioat.o 00:06:17.816 SYMLINK libspdk_rdma_utils.so 00:06:17.816 CC lib/env_dpdk/pci_virtio.o 00:06:17.816 CC lib/env_dpdk/pci_vmd.o 00:06:17.816 CC lib/idxd/idxd_kernel.o 00:06:17.816 CC lib/env_dpdk/pci_idxd.o 00:06:17.816 CC lib/env_dpdk/pci_event.o 00:06:17.816 CC lib/json/json_write.o 00:06:17.816 CC lib/env_dpdk/sigbus_handler.o 00:06:17.816 CC lib/env_dpdk/pci_dpdk.o 00:06:17.816 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:17.816 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:17.816 CC lib/rdma_provider/common.o 00:06:17.816 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:17.816 LIB libspdk_vmd.a 00:06:17.816 SO libspdk_vmd.so.6.0 00:06:17.816 LIB libspdk_json.a 00:06:17.816 SYMLINK libspdk_vmd.so 00:06:17.816 LIB libspdk_idxd.a 00:06:17.816 SO libspdk_json.so.6.0 00:06:17.816 SO libspdk_idxd.so.12.1 00:06:17.816 LIB libspdk_rdma_provider.a 00:06:17.816 SYMLINK libspdk_json.so 00:06:17.816 SO libspdk_rdma_provider.so.7.0 00:06:17.816 SYMLINK libspdk_idxd.so 00:06:17.816 SYMLINK libspdk_rdma_provider.so 00:06:17.816 CC lib/jsonrpc/jsonrpc_server.o 00:06:17.816 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:17.816 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:17.816 CC lib/jsonrpc/jsonrpc_client.o 00:06:18.074 LIB libspdk_jsonrpc.a 00:06:18.074 SO libspdk_jsonrpc.so.6.0 00:06:18.074 SYMLINK libspdk_jsonrpc.so 00:06:18.334 LIB libspdk_env_dpdk.a 00:06:18.334 SO libspdk_env_dpdk.so.15.1 00:06:18.334 CC lib/rpc/rpc.o 00:06:18.593 SYMLINK libspdk_env_dpdk.so 00:06:18.593 LIB libspdk_rpc.a 00:06:18.851 SO libspdk_rpc.so.6.0 00:06:18.851 SYMLINK libspdk_rpc.so 00:06:19.111 CC lib/keyring/keyring.o 00:06:19.111 CC lib/keyring/keyring_rpc.o 00:06:19.111 CC lib/notify/notify.o 00:06:19.111 CC 
lib/notify/notify_rpc.o 00:06:19.111 CC lib/trace/trace.o 00:06:19.111 CC lib/trace/trace_flags.o 00:06:19.111 CC lib/trace/trace_rpc.o 00:06:19.111 LIB libspdk_notify.a 00:06:19.369 SO libspdk_notify.so.6.0 00:06:19.369 LIB libspdk_trace.a 00:06:19.369 SYMLINK libspdk_notify.so 00:06:19.369 LIB libspdk_keyring.a 00:06:19.369 SO libspdk_trace.so.11.0 00:06:19.369 SO libspdk_keyring.so.2.0 00:06:19.369 SYMLINK libspdk_trace.so 00:06:19.369 SYMLINK libspdk_keyring.so 00:06:19.627 CC lib/thread/thread.o 00:06:19.627 CC lib/sock/sock.o 00:06:19.627 CC lib/sock/sock_rpc.o 00:06:19.627 CC lib/thread/iobuf.o 00:06:20.193 LIB libspdk_sock.a 00:06:20.451 SO libspdk_sock.so.10.0 00:06:20.451 SYMLINK libspdk_sock.so 00:06:20.708 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:20.708 CC lib/nvme/nvme_ctrlr.o 00:06:20.708 CC lib/nvme/nvme_fabric.o 00:06:20.708 CC lib/nvme/nvme_ns_cmd.o 00:06:20.708 CC lib/nvme/nvme_ns.o 00:06:20.708 CC lib/nvme/nvme_pcie_common.o 00:06:20.708 CC lib/nvme/nvme.o 00:06:20.708 CC lib/nvme/nvme_qpair.o 00:06:20.708 CC lib/nvme/nvme_pcie.o 00:06:21.643 CC lib/nvme/nvme_quirks.o 00:06:21.643 CC lib/nvme/nvme_transport.o 00:06:21.643 CC lib/nvme/nvme_discovery.o 00:06:21.643 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:21.643 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:21.902 LIB libspdk_thread.a 00:06:21.902 CC lib/nvme/nvme_tcp.o 00:06:21.902 SO libspdk_thread.so.11.0 00:06:21.902 CC lib/nvme/nvme_opal.o 00:06:21.902 SYMLINK libspdk_thread.so 00:06:22.160 CC lib/nvme/nvme_io_msg.o 00:06:22.160 CC lib/accel/accel.o 00:06:22.160 CC lib/nvme/nvme_poll_group.o 00:06:22.418 CC lib/nvme/nvme_zns.o 00:06:22.418 CC lib/nvme/nvme_stubs.o 00:06:22.418 CC lib/nvme/nvme_auth.o 00:06:22.418 CC lib/nvme/nvme_cuse.o 00:06:22.675 CC lib/nvme/nvme_rdma.o 00:06:22.934 CC lib/accel/accel_rpc.o 00:06:22.934 CC lib/blob/blobstore.o 00:06:23.192 CC lib/init/json_config.o 00:06:23.192 CC lib/virtio/virtio.o 00:06:23.192 CC lib/virtio/virtio_vhost_user.o 00:06:23.450 CC lib/init/subsystem.o 
00:06:23.450 CC lib/init/subsystem_rpc.o 00:06:23.450 CC lib/init/rpc.o 00:06:23.708 CC lib/virtio/virtio_vfio_user.o 00:06:23.708 CC lib/virtio/virtio_pci.o 00:06:23.708 CC lib/accel/accel_sw.o 00:06:23.708 CC lib/blob/request.o 00:06:23.708 LIB libspdk_init.a 00:06:23.708 CC lib/blob/zeroes.o 00:06:23.708 SO libspdk_init.so.6.0 00:06:23.708 CC lib/blob/blob_bs_dev.o 00:06:23.708 CC lib/fsdev/fsdev.o 00:06:23.708 SYMLINK libspdk_init.so 00:06:23.708 CC lib/fsdev/fsdev_io.o 00:06:23.967 LIB libspdk_virtio.a 00:06:23.967 CC lib/fsdev/fsdev_rpc.o 00:06:23.967 SO libspdk_virtio.so.7.0 00:06:23.967 CC lib/event/app.o 00:06:23.967 LIB libspdk_accel.a 00:06:23.967 CC lib/event/reactor.o 00:06:23.967 CC lib/event/log_rpc.o 00:06:23.967 SO libspdk_accel.so.16.0 00:06:24.226 SYMLINK libspdk_virtio.so 00:06:24.226 CC lib/event/app_rpc.o 00:06:24.226 CC lib/event/scheduler_static.o 00:06:24.226 SYMLINK libspdk_accel.so 00:06:24.226 LIB libspdk_nvme.a 00:06:24.484 CC lib/bdev/bdev.o 00:06:24.484 CC lib/bdev/bdev_rpc.o 00:06:24.484 CC lib/bdev/part.o 00:06:24.484 CC lib/bdev/bdev_zone.o 00:06:24.484 CC lib/bdev/scsi_nvme.o 00:06:24.484 LIB libspdk_fsdev.a 00:06:24.743 SO libspdk_nvme.so.15.0 00:06:24.743 SO libspdk_fsdev.so.2.0 00:06:24.743 LIB libspdk_event.a 00:06:24.743 SYMLINK libspdk_fsdev.so 00:06:24.743 SO libspdk_event.so.14.0 00:06:24.743 SYMLINK libspdk_event.so 00:06:25.001 SYMLINK libspdk_nvme.so 00:06:25.001 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:25.935 LIB libspdk_fuse_dispatcher.a 00:06:25.935 SO libspdk_fuse_dispatcher.so.1.0 00:06:25.935 SYMLINK libspdk_fuse_dispatcher.so 00:06:27.835 LIB libspdk_blob.a 00:06:27.835 SO libspdk_blob.so.11.0 00:06:28.094 LIB libspdk_bdev.a 00:06:28.094 SYMLINK libspdk_blob.so 00:06:28.094 SO libspdk_bdev.so.17.0 00:06:28.352 SYMLINK libspdk_bdev.so 00:06:28.352 CC lib/blobfs/blobfs.o 00:06:28.352 CC lib/blobfs/tree.o 00:06:28.352 CC lib/lvol/lvol.o 00:06:28.352 CC lib/nbd/nbd.o 00:06:28.352 CC lib/nbd/nbd_rpc.o 
00:06:28.352 CC lib/nvmf/ctrlr.o 00:06:28.352 CC lib/scsi/lun.o 00:06:28.352 CC lib/scsi/dev.o 00:06:28.352 CC lib/ftl/ftl_core.o 00:06:28.352 CC lib/ublk/ublk.o 00:06:28.609 CC lib/ftl/ftl_init.o 00:06:28.609 CC lib/ftl/ftl_layout.o 00:06:28.867 CC lib/scsi/port.o 00:06:28.867 CC lib/ublk/ublk_rpc.o 00:06:28.867 CC lib/scsi/scsi.o 00:06:28.867 CC lib/scsi/scsi_bdev.o 00:06:28.867 CC lib/scsi/scsi_pr.o 00:06:28.867 LIB libspdk_nbd.a 00:06:28.867 CC lib/scsi/scsi_rpc.o 00:06:29.125 CC lib/nvmf/ctrlr_discovery.o 00:06:29.125 SO libspdk_nbd.so.7.0 00:06:29.125 SYMLINK libspdk_nbd.so 00:06:29.125 CC lib/nvmf/ctrlr_bdev.o 00:06:29.125 CC lib/ftl/ftl_debug.o 00:06:29.125 CC lib/nvmf/subsystem.o 00:06:29.388 LIB libspdk_ublk.a 00:06:29.388 SO libspdk_ublk.so.3.0 00:06:29.388 CC lib/ftl/ftl_io.o 00:06:29.388 CC lib/ftl/ftl_sb.o 00:06:29.388 LIB libspdk_blobfs.a 00:06:29.388 SO libspdk_blobfs.so.10.0 00:06:29.388 SYMLINK libspdk_ublk.so 00:06:29.388 CC lib/nvmf/nvmf.o 00:06:29.388 LIB libspdk_lvol.a 00:06:29.388 SYMLINK libspdk_blobfs.so 00:06:29.388 CC lib/nvmf/nvmf_rpc.o 00:06:29.388 SO libspdk_lvol.so.10.0 00:06:29.650 CC lib/scsi/task.o 00:06:29.650 SYMLINK libspdk_lvol.so 00:06:29.650 CC lib/nvmf/transport.o 00:06:29.650 CC lib/ftl/ftl_l2p.o 00:06:29.650 CC lib/nvmf/tcp.o 00:06:29.650 CC lib/ftl/ftl_l2p_flat.o 00:06:29.908 LIB libspdk_scsi.a 00:06:29.908 CC lib/ftl/ftl_nv_cache.o 00:06:29.908 SO libspdk_scsi.so.9.0 00:06:29.908 CC lib/nvmf/stubs.o 00:06:29.908 SYMLINK libspdk_scsi.so 00:06:29.908 CC lib/nvmf/mdns_server.o 00:06:29.908 CC lib/nvmf/rdma.o 00:06:30.473 CC lib/ftl/ftl_band.o 00:06:30.473 CC lib/nvmf/auth.o 00:06:30.473 CC lib/ftl/ftl_band_ops.o 00:06:30.731 CC lib/iscsi/conn.o 00:06:30.731 CC lib/vhost/vhost.o 00:06:30.731 CC lib/iscsi/init_grp.o 00:06:30.731 CC lib/iscsi/iscsi.o 00:06:30.988 CC lib/iscsi/param.o 00:06:30.988 CC lib/ftl/ftl_writer.o 00:06:30.988 CC lib/ftl/ftl_rq.o 00:06:30.988 CC lib/iscsi/portal_grp.o 00:06:31.246 CC lib/iscsi/tgt_node.o 
00:06:31.246 CC lib/ftl/ftl_reloc.o 00:06:31.246 CC lib/ftl/ftl_l2p_cache.o 00:06:31.551 CC lib/ftl/ftl_p2l.o 00:06:31.551 CC lib/iscsi/iscsi_subsystem.o 00:06:31.551 CC lib/iscsi/iscsi_rpc.o 00:06:31.551 CC lib/iscsi/task.o 00:06:31.809 CC lib/vhost/vhost_rpc.o 00:06:31.809 CC lib/vhost/vhost_scsi.o 00:06:31.809 CC lib/vhost/vhost_blk.o 00:06:31.809 CC lib/ftl/ftl_p2l_log.o 00:06:31.809 CC lib/ftl/mngt/ftl_mngt.o 00:06:32.065 CC lib/vhost/rte_vhost_user.o 00:06:32.065 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:32.065 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:32.323 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:32.323 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:32.323 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:32.581 LIB libspdk_iscsi.a 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:32.581 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:32.840 SO libspdk_iscsi.so.8.0 00:06:32.840 CC lib/ftl/utils/ftl_conf.o 00:06:32.840 CC lib/ftl/utils/ftl_md.o 00:06:32.840 CC lib/ftl/utils/ftl_mempool.o 00:06:32.840 CC lib/ftl/utils/ftl_bitmap.o 00:06:32.840 SYMLINK libspdk_iscsi.so 00:06:32.840 CC lib/ftl/utils/ftl_property.o 00:06:32.840 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:32.840 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:33.098 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:33.098 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:33.098 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:33.098 LIB libspdk_nvmf.a 00:06:33.098 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:33.098 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:33.098 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:33.098 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:33.098 LIB libspdk_vhost.a 00:06:33.357 SO libspdk_vhost.so.8.0 00:06:33.357 SO libspdk_nvmf.so.20.0 00:06:33.357 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:33.357 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:33.357 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:33.357 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:33.357 CC lib/ftl/base/ftl_base_dev.o 00:06:33.357 CC lib/ftl/base/ftl_base_bdev.o 00:06:33.357 SYMLINK libspdk_vhost.so 00:06:33.357 CC lib/ftl/ftl_trace.o 00:06:33.615 SYMLINK libspdk_nvmf.so 00:06:33.874 LIB libspdk_ftl.a 00:06:34.133 SO libspdk_ftl.so.9.0 00:06:34.391 SYMLINK libspdk_ftl.so 00:06:34.650 CC module/env_dpdk/env_dpdk_rpc.o 00:06:34.650 CC module/keyring/file/keyring.o 00:06:34.650 CC module/accel/error/accel_error.o 00:06:34.650 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:34.909 CC module/fsdev/aio/fsdev_aio.o 00:06:34.909 CC module/sock/posix/posix.o 00:06:34.909 CC module/accel/dsa/accel_dsa.o 00:06:34.909 CC module/blob/bdev/blob_bdev.o 00:06:34.909 CC module/accel/ioat/accel_ioat.o 00:06:34.909 CC module/keyring/linux/keyring.o 00:06:34.909 LIB libspdk_env_dpdk_rpc.a 00:06:34.909 SO libspdk_env_dpdk_rpc.so.6.0 00:06:34.909 CC module/keyring/file/keyring_rpc.o 00:06:34.909 SYMLINK libspdk_env_dpdk_rpc.so 00:06:34.909 CC module/accel/ioat/accel_ioat_rpc.o 00:06:34.909 CC module/keyring/linux/keyring_rpc.o 00:06:34.909 CC module/accel/error/accel_error_rpc.o 00:06:35.168 CC module/accel/dsa/accel_dsa_rpc.o 00:06:35.168 LIB libspdk_scheduler_dynamic.a 00:06:35.168 LIB libspdk_keyring_file.a 00:06:35.168 LIB libspdk_accel_ioat.a 00:06:35.168 SO libspdk_scheduler_dynamic.so.4.0 00:06:35.168 LIB libspdk_keyring_linux.a 00:06:35.168 SO libspdk_keyring_file.so.2.0 00:06:35.168 SO libspdk_accel_ioat.so.6.0 00:06:35.168 SO libspdk_keyring_linux.so.1.0 00:06:35.168 LIB libspdk_blob_bdev.a 00:06:35.168 SYMLINK libspdk_scheduler_dynamic.so 00:06:35.168 LIB libspdk_accel_error.a 00:06:35.168 SO libspdk_blob_bdev.so.11.0 00:06:35.168 SYMLINK libspdk_accel_ioat.so 00:06:35.168 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:35.168 SYMLINK libspdk_keyring_file.so 00:06:35.168 CC module/fsdev/aio/linux_aio_mgr.o 
00:06:35.168 LIB libspdk_accel_dsa.a 00:06:35.168 SO libspdk_accel_error.so.2.0 00:06:35.168 SYMLINK libspdk_keyring_linux.so 00:06:35.168 SO libspdk_accel_dsa.so.5.0 00:06:35.168 SYMLINK libspdk_blob_bdev.so 00:06:35.168 SYMLINK libspdk_accel_error.so 00:06:35.426 SYMLINK libspdk_accel_dsa.so 00:06:35.426 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:35.426 CC module/accel/iaa/accel_iaa.o 00:06:35.426 CC module/accel/iaa/accel_iaa_rpc.o 00:06:35.426 CC module/scheduler/gscheduler/gscheduler.o 00:06:35.426 LIB libspdk_scheduler_dpdk_governor.a 00:06:35.426 CC module/bdev/delay/vbdev_delay.o 00:06:35.426 CC module/bdev/error/vbdev_error.o 00:06:35.426 CC module/blobfs/bdev/blobfs_bdev.o 00:06:35.426 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:35.685 LIB libspdk_accel_iaa.a 00:06:35.685 CC module/bdev/gpt/gpt.o 00:06:35.685 LIB libspdk_scheduler_gscheduler.a 00:06:35.685 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:35.685 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:35.685 SO libspdk_accel_iaa.so.3.0 00:06:35.685 SO libspdk_scheduler_gscheduler.so.4.0 00:06:35.685 CC module/bdev/lvol/vbdev_lvol.o 00:06:35.685 LIB libspdk_fsdev_aio.a 00:06:35.685 SYMLINK libspdk_scheduler_gscheduler.so 00:06:35.685 SYMLINK libspdk_accel_iaa.so 00:06:35.685 SO libspdk_fsdev_aio.so.1.0 00:06:35.685 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:35.685 LIB libspdk_sock_posix.a 00:06:35.685 CC module/bdev/gpt/vbdev_gpt.o 00:06:35.685 SYMLINK libspdk_fsdev_aio.so 00:06:35.685 SO libspdk_sock_posix.so.6.0 00:06:35.943 LIB libspdk_blobfs_bdev.a 00:06:35.943 CC module/bdev/error/vbdev_error_rpc.o 00:06:35.943 SO libspdk_blobfs_bdev.so.6.0 00:06:35.943 SYMLINK libspdk_sock_posix.so 00:06:35.943 CC module/bdev/malloc/bdev_malloc.o 00:06:35.943 SYMLINK libspdk_blobfs_bdev.so 00:06:35.943 LIB libspdk_bdev_delay.a 00:06:35.943 CC module/bdev/null/bdev_null.o 00:06:35.943 CC module/bdev/nvme/bdev_nvme.o 00:06:35.943 SO libspdk_bdev_delay.so.6.0 00:06:36.202 CC 
module/bdev/passthru/vbdev_passthru.o 00:06:36.202 LIB libspdk_bdev_gpt.a 00:06:36.202 CC module/bdev/raid/bdev_raid.o 00:06:36.202 LIB libspdk_bdev_error.a 00:06:36.202 SYMLINK libspdk_bdev_delay.so 00:06:36.202 SO libspdk_bdev_gpt.so.6.0 00:06:36.202 CC module/bdev/split/vbdev_split.o 00:06:36.202 CC module/bdev/split/vbdev_split_rpc.o 00:06:36.202 SO libspdk_bdev_error.so.6.0 00:06:36.202 SYMLINK libspdk_bdev_gpt.so 00:06:36.202 SYMLINK libspdk_bdev_error.so 00:06:36.202 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:36.462 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:36.462 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:36.462 CC module/bdev/null/bdev_null_rpc.o 00:06:36.462 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:36.462 LIB libspdk_bdev_split.a 00:06:36.462 SO libspdk_bdev_split.so.6.0 00:06:36.462 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:36.462 LIB libspdk_bdev_malloc.a 00:06:36.462 SO libspdk_bdev_malloc.so.6.0 00:06:36.462 CC module/bdev/aio/bdev_aio.o 00:06:36.462 SYMLINK libspdk_bdev_split.so 00:06:36.462 SYMLINK libspdk_bdev_malloc.so 00:06:36.801 CC module/bdev/aio/bdev_aio_rpc.o 00:06:36.801 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:36.801 LIB libspdk_bdev_null.a 00:06:36.801 SO libspdk_bdev_null.so.6.0 00:06:36.801 LIB libspdk_bdev_passthru.a 00:06:36.801 CC module/bdev/ftl/bdev_ftl.o 00:06:36.801 SYMLINK libspdk_bdev_null.so 00:06:36.801 SO libspdk_bdev_passthru.so.6.0 00:06:36.801 LIB libspdk_bdev_zone_block.a 00:06:36.801 SO libspdk_bdev_zone_block.so.6.0 00:06:36.801 LIB libspdk_bdev_lvol.a 00:06:36.801 SYMLINK libspdk_bdev_passthru.so 00:06:36.801 CC module/bdev/nvme/nvme_rpc.o 00:06:36.801 SO libspdk_bdev_lvol.so.6.0 00:06:36.802 SYMLINK libspdk_bdev_zone_block.so 00:06:37.078 CC module/bdev/iscsi/bdev_iscsi.o 00:06:37.078 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:37.078 LIB libspdk_bdev_aio.a 00:06:37.078 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:37.078 SYMLINK libspdk_bdev_lvol.so 00:06:37.078 CC 
module/bdev/nvme/bdev_mdns_client.o 00:06:37.078 SO libspdk_bdev_aio.so.6.0 00:06:37.078 CC module/bdev/nvme/vbdev_opal.o 00:06:37.078 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:37.078 CC module/bdev/raid/bdev_raid_rpc.o 00:06:37.078 SYMLINK libspdk_bdev_aio.so 00:06:37.078 CC module/bdev/raid/bdev_raid_sb.o 00:06:37.078 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:37.336 LIB libspdk_bdev_ftl.a 00:06:37.336 LIB libspdk_bdev_iscsi.a 00:06:37.336 SO libspdk_bdev_ftl.so.6.0 00:06:37.336 SO libspdk_bdev_iscsi.so.6.0 00:06:37.336 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:37.336 CC module/bdev/raid/raid0.o 00:06:37.595 SYMLINK libspdk_bdev_ftl.so 00:06:37.595 SYMLINK libspdk_bdev_iscsi.so 00:06:37.595 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:37.595 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:37.595 CC module/bdev/raid/raid1.o 00:06:37.595 CC module/bdev/raid/concat.o 00:06:37.595 CC module/bdev/raid/raid5f.o 00:06:37.853 LIB libspdk_bdev_virtio.a 00:06:37.853 SO libspdk_bdev_virtio.so.6.0 00:06:38.112 SYMLINK libspdk_bdev_virtio.so 00:06:38.112 LIB libspdk_bdev_raid.a 00:06:38.370 SO libspdk_bdev_raid.so.6.0 00:06:38.370 SYMLINK libspdk_bdev_raid.so 00:06:39.746 LIB libspdk_bdev_nvme.a 00:06:39.746 SO libspdk_bdev_nvme.so.7.1 00:06:40.004 SYMLINK libspdk_bdev_nvme.so 00:06:40.570 CC module/event/subsystems/sock/sock.o 00:06:40.570 CC module/event/subsystems/vmd/vmd.o 00:06:40.570 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:40.570 CC module/event/subsystems/fsdev/fsdev.o 00:06:40.570 CC module/event/subsystems/keyring/keyring.o 00:06:40.570 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:40.570 CC module/event/subsystems/scheduler/scheduler.o 00:06:40.570 CC module/event/subsystems/iobuf/iobuf.o 00:06:40.570 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:40.570 LIB libspdk_event_vhost_blk.a 00:06:40.570 LIB libspdk_event_sock.a 00:06:40.570 LIB libspdk_event_keyring.a 00:06:40.570 SO libspdk_event_vhost_blk.so.3.0 00:06:40.570 SO 
libspdk_event_sock.so.5.0 00:06:40.570 LIB libspdk_event_vmd.a 00:06:40.570 LIB libspdk_event_fsdev.a 00:06:40.570 LIB libspdk_event_scheduler.a 00:06:40.828 SO libspdk_event_keyring.so.1.0 00:06:40.828 SO libspdk_event_fsdev.so.1.0 00:06:40.828 SO libspdk_event_vmd.so.6.0 00:06:40.828 SO libspdk_event_scheduler.so.4.0 00:06:40.828 SYMLINK libspdk_event_sock.so 00:06:40.828 LIB libspdk_event_iobuf.a 00:06:40.828 SYMLINK libspdk_event_vhost_blk.so 00:06:40.828 SYMLINK libspdk_event_keyring.so 00:06:40.828 SO libspdk_event_iobuf.so.3.0 00:06:40.828 SYMLINK libspdk_event_fsdev.so 00:06:40.828 SYMLINK libspdk_event_vmd.so 00:06:40.828 SYMLINK libspdk_event_scheduler.so 00:06:40.828 SYMLINK libspdk_event_iobuf.so 00:06:41.086 CC module/event/subsystems/accel/accel.o 00:06:41.344 LIB libspdk_event_accel.a 00:06:41.344 SO libspdk_event_accel.so.6.0 00:06:41.344 SYMLINK libspdk_event_accel.so 00:06:41.602 CC module/event/subsystems/bdev/bdev.o 00:06:41.860 LIB libspdk_event_bdev.a 00:06:41.860 SO libspdk_event_bdev.so.6.0 00:06:41.860 SYMLINK libspdk_event_bdev.so 00:06:42.116 CC module/event/subsystems/ublk/ublk.o 00:06:42.116 CC module/event/subsystems/scsi/scsi.o 00:06:42.116 CC module/event/subsystems/nbd/nbd.o 00:06:42.116 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:42.116 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:42.374 LIB libspdk_event_ublk.a 00:06:42.374 LIB libspdk_event_scsi.a 00:06:42.374 LIB libspdk_event_nbd.a 00:06:42.374 SO libspdk_event_ublk.so.3.0 00:06:42.374 SO libspdk_event_scsi.so.6.0 00:06:42.374 SO libspdk_event_nbd.so.6.0 00:06:42.374 SYMLINK libspdk_event_ublk.so 00:06:42.374 SYMLINK libspdk_event_nbd.so 00:06:42.631 SYMLINK libspdk_event_scsi.so 00:06:42.631 LIB libspdk_event_nvmf.a 00:06:42.631 SO libspdk_event_nvmf.so.6.0 00:06:42.631 SYMLINK libspdk_event_nvmf.so 00:06:42.631 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:42.631 CC module/event/subsystems/iscsi/iscsi.o 00:06:42.889 LIB libspdk_event_vhost_scsi.a 
00:06:42.889 SO libspdk_event_vhost_scsi.so.3.0 00:06:42.889 LIB libspdk_event_iscsi.a 00:06:42.889 SO libspdk_event_iscsi.so.6.0 00:06:43.147 SYMLINK libspdk_event_vhost_scsi.so 00:06:43.147 SYMLINK libspdk_event_iscsi.so 00:06:43.147 SO libspdk.so.6.0 00:06:43.147 SYMLINK libspdk.so 00:06:43.406 CC app/spdk_lspci/spdk_lspci.o 00:06:43.406 CC app/trace_record/trace_record.o 00:06:43.406 CXX app/trace/trace.o 00:06:43.406 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:43.664 CC app/nvmf_tgt/nvmf_main.o 00:06:43.664 CC app/iscsi_tgt/iscsi_tgt.o 00:06:43.664 CC app/spdk_tgt/spdk_tgt.o 00:06:43.664 CC test/thread/poller_perf/poller_perf.o 00:06:43.664 CC examples/util/zipf/zipf.o 00:06:43.664 CC examples/ioat/perf/perf.o 00:06:43.664 LINK spdk_lspci 00:06:43.922 LINK nvmf_tgt 00:06:43.923 LINK interrupt_tgt 00:06:43.923 LINK poller_perf 00:06:43.923 LINK zipf 00:06:43.923 LINK iscsi_tgt 00:06:43.923 LINK spdk_tgt 00:06:43.923 LINK spdk_trace_record 00:06:43.923 LINK ioat_perf 00:06:43.923 CC app/spdk_nvme_perf/perf.o 00:06:44.180 LINK spdk_trace 00:06:44.180 TEST_HEADER include/spdk/accel.h 00:06:44.180 TEST_HEADER include/spdk/accel_module.h 00:06:44.180 TEST_HEADER include/spdk/assert.h 00:06:44.180 TEST_HEADER include/spdk/barrier.h 00:06:44.180 TEST_HEADER include/spdk/base64.h 00:06:44.181 TEST_HEADER include/spdk/bdev.h 00:06:44.181 TEST_HEADER include/spdk/bdev_module.h 00:06:44.181 TEST_HEADER include/spdk/bdev_zone.h 00:06:44.181 TEST_HEADER include/spdk/bit_array.h 00:06:44.181 TEST_HEADER include/spdk/bit_pool.h 00:06:44.181 TEST_HEADER include/spdk/blob_bdev.h 00:06:44.181 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:44.181 TEST_HEADER include/spdk/blobfs.h 00:06:44.181 TEST_HEADER include/spdk/blob.h 00:06:44.181 TEST_HEADER include/spdk/conf.h 00:06:44.181 CC app/spdk_nvme_discover/discovery_aer.o 00:06:44.181 TEST_HEADER include/spdk/config.h 00:06:44.181 CC app/spdk_nvme_identify/identify.o 00:06:44.181 TEST_HEADER include/spdk/cpuset.h 00:06:44.181 
TEST_HEADER include/spdk/crc16.h 00:06:44.181 TEST_HEADER include/spdk/crc32.h 00:06:44.181 TEST_HEADER include/spdk/crc64.h 00:06:44.181 TEST_HEADER include/spdk/dif.h 00:06:44.181 TEST_HEADER include/spdk/dma.h 00:06:44.181 TEST_HEADER include/spdk/endian.h 00:06:44.181 TEST_HEADER include/spdk/env_dpdk.h 00:06:44.181 TEST_HEADER include/spdk/env.h 00:06:44.181 TEST_HEADER include/spdk/event.h 00:06:44.181 TEST_HEADER include/spdk/fd_group.h 00:06:44.181 TEST_HEADER include/spdk/fd.h 00:06:44.181 TEST_HEADER include/spdk/file.h 00:06:44.181 TEST_HEADER include/spdk/fsdev.h 00:06:44.181 TEST_HEADER include/spdk/fsdev_module.h 00:06:44.181 TEST_HEADER include/spdk/ftl.h 00:06:44.181 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:44.181 TEST_HEADER include/spdk/gpt_spec.h 00:06:44.181 TEST_HEADER include/spdk/hexlify.h 00:06:44.181 TEST_HEADER include/spdk/histogram_data.h 00:06:44.181 CC test/dma/test_dma/test_dma.o 00:06:44.181 TEST_HEADER include/spdk/idxd.h 00:06:44.181 TEST_HEADER include/spdk/idxd_spec.h 00:06:44.181 TEST_HEADER include/spdk/init.h 00:06:44.181 CC app/spdk_top/spdk_top.o 00:06:44.181 TEST_HEADER include/spdk/ioat.h 00:06:44.181 TEST_HEADER include/spdk/ioat_spec.h 00:06:44.181 TEST_HEADER include/spdk/iscsi_spec.h 00:06:44.181 TEST_HEADER include/spdk/json.h 00:06:44.181 TEST_HEADER include/spdk/jsonrpc.h 00:06:44.181 TEST_HEADER include/spdk/keyring.h 00:06:44.181 CC examples/ioat/verify/verify.o 00:06:44.181 TEST_HEADER include/spdk/keyring_module.h 00:06:44.181 TEST_HEADER include/spdk/likely.h 00:06:44.181 TEST_HEADER include/spdk/log.h 00:06:44.181 TEST_HEADER include/spdk/lvol.h 00:06:44.181 TEST_HEADER include/spdk/md5.h 00:06:44.181 TEST_HEADER include/spdk/memory.h 00:06:44.181 TEST_HEADER include/spdk/mmio.h 00:06:44.181 TEST_HEADER include/spdk/nbd.h 00:06:44.181 TEST_HEADER include/spdk/net.h 00:06:44.181 TEST_HEADER include/spdk/notify.h 00:06:44.181 TEST_HEADER include/spdk/nvme.h 00:06:44.181 TEST_HEADER 
include/spdk/nvme_intel.h 00:06:44.181 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:44.181 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:44.181 CC test/app/bdev_svc/bdev_svc.o 00:06:44.181 TEST_HEADER include/spdk/nvme_spec.h 00:06:44.181 TEST_HEADER include/spdk/nvme_zns.h 00:06:44.181 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:44.181 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:44.181 TEST_HEADER include/spdk/nvmf.h 00:06:44.181 TEST_HEADER include/spdk/nvmf_spec.h 00:06:44.181 TEST_HEADER include/spdk/nvmf_transport.h 00:06:44.181 TEST_HEADER include/spdk/opal.h 00:06:44.181 TEST_HEADER include/spdk/opal_spec.h 00:06:44.181 TEST_HEADER include/spdk/pci_ids.h 00:06:44.181 TEST_HEADER include/spdk/pipe.h 00:06:44.181 TEST_HEADER include/spdk/queue.h 00:06:44.181 TEST_HEADER include/spdk/reduce.h 00:06:44.181 TEST_HEADER include/spdk/rpc.h 00:06:44.181 TEST_HEADER include/spdk/scheduler.h 00:06:44.181 TEST_HEADER include/spdk/scsi.h 00:06:44.181 TEST_HEADER include/spdk/scsi_spec.h 00:06:44.181 TEST_HEADER include/spdk/sock.h 00:06:44.181 TEST_HEADER include/spdk/stdinc.h 00:06:44.181 TEST_HEADER include/spdk/string.h 00:06:44.181 TEST_HEADER include/spdk/thread.h 00:06:44.181 TEST_HEADER include/spdk/trace.h 00:06:44.181 TEST_HEADER include/spdk/trace_parser.h 00:06:44.181 TEST_HEADER include/spdk/tree.h 00:06:44.181 TEST_HEADER include/spdk/ublk.h 00:06:44.181 TEST_HEADER include/spdk/util.h 00:06:44.181 TEST_HEADER include/spdk/uuid.h 00:06:44.181 TEST_HEADER include/spdk/version.h 00:06:44.181 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:44.439 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:44.439 TEST_HEADER include/spdk/vhost.h 00:06:44.439 TEST_HEADER include/spdk/vmd.h 00:06:44.439 CC examples/thread/thread/thread_ex.o 00:06:44.439 TEST_HEADER include/spdk/xor.h 00:06:44.439 TEST_HEADER include/spdk/zipf.h 00:06:44.439 CXX test/cpp_headers/accel.o 00:06:44.439 LINK spdk_nvme_discover 00:06:44.439 CC app/spdk_dd/spdk_dd.o 00:06:44.439 LINK bdev_svc 
00:06:44.439 LINK verify 00:06:44.439 CXX test/cpp_headers/accel_module.o 00:06:44.697 CXX test/cpp_headers/assert.o 00:06:44.697 LINK thread 00:06:44.697 CXX test/cpp_headers/barrier.o 00:06:44.697 LINK test_dma 00:06:44.955 CXX test/cpp_headers/base64.o 00:06:44.955 CC test/app/histogram_perf/histogram_perf.o 00:06:44.955 LINK spdk_dd 00:06:44.955 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:44.955 CC app/fio/nvme/fio_plugin.o 00:06:44.955 LINK histogram_perf 00:06:44.955 CXX test/cpp_headers/bdev.o 00:06:44.955 CC examples/sock/hello_world/hello_sock.o 00:06:45.213 LINK spdk_nvme_perf 00:06:45.213 CC app/vhost/vhost.o 00:06:45.213 CXX test/cpp_headers/bdev_module.o 00:06:45.213 CC examples/vmd/lsvmd/lsvmd.o 00:06:45.213 CXX test/cpp_headers/bdev_zone.o 00:06:45.471 CC examples/idxd/perf/perf.o 00:06:45.471 LINK spdk_nvme_identify 00:06:45.471 LINK spdk_top 00:06:45.471 LINK hello_sock 00:06:45.471 LINK vhost 00:06:45.471 LINK lsvmd 00:06:45.471 LINK nvme_fuzz 00:06:45.471 CXX test/cpp_headers/bit_array.o 00:06:45.729 CXX test/cpp_headers/bit_pool.o 00:06:45.729 LINK spdk_nvme 00:06:45.729 CC app/fio/bdev/fio_plugin.o 00:06:45.729 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:45.729 CC examples/accel/perf/accel_perf.o 00:06:45.729 CC examples/vmd/led/led.o 00:06:45.729 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:45.729 CC examples/blob/hello_world/hello_blob.o 00:06:45.729 LINK idxd_perf 00:06:45.729 CXX test/cpp_headers/blob_bdev.o 00:06:45.986 CC examples/nvme/hello_world/hello_world.o 00:06:45.986 LINK led 00:06:45.986 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:45.986 CXX test/cpp_headers/blobfs_bdev.o 00:06:45.986 LINK hello_fsdev 00:06:45.986 LINK hello_blob 00:06:46.244 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:46.244 LINK hello_world 00:06:46.244 CC test/env/mem_callbacks/mem_callbacks.o 00:06:46.244 CXX test/cpp_headers/blobfs.o 00:06:46.244 CC examples/blob/cli/blobcli.o 00:06:46.244 LINK spdk_bdev 00:06:46.501 LINK accel_perf 
00:06:46.501 CC examples/nvme/reconnect/reconnect.o 00:06:46.501 CXX test/cpp_headers/blob.o 00:06:46.501 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:46.501 CC test/event/event_perf/event_perf.o 00:06:46.760 CC test/event/reactor/reactor.o 00:06:46.760 CXX test/cpp_headers/conf.o 00:06:46.760 CC test/event/reactor_perf/reactor_perf.o 00:06:46.760 LINK vhost_fuzz 00:06:46.760 LINK reactor 00:06:46.760 LINK event_perf 00:06:47.018 LINK reactor_perf 00:06:47.018 CXX test/cpp_headers/config.o 00:06:47.018 LINK reconnect 00:06:47.018 LINK blobcli 00:06:47.018 CXX test/cpp_headers/cpuset.o 00:06:47.018 LINK mem_callbacks 00:06:47.018 CC examples/nvme/arbitration/arbitration.o 00:06:47.018 CC test/event/app_repeat/app_repeat.o 00:06:47.018 CC examples/nvme/hotplug/hotplug.o 00:06:47.275 CXX test/cpp_headers/crc16.o 00:06:47.275 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:47.275 LINK nvme_manage 00:06:47.275 LINK app_repeat 00:06:47.275 CC test/env/vtophys/vtophys.o 00:06:47.275 CC test/event/scheduler/scheduler.o 00:06:47.275 CXX test/cpp_headers/crc32.o 00:06:47.533 LINK hotplug 00:06:47.533 LINK cmb_copy 00:06:47.533 CXX test/cpp_headers/crc64.o 00:06:47.533 CC examples/bdev/hello_world/hello_bdev.o 00:06:47.533 LINK vtophys 00:06:47.533 CXX test/cpp_headers/dif.o 00:06:47.533 LINK scheduler 00:06:47.533 LINK arbitration 00:06:47.792 CXX test/cpp_headers/dma.o 00:06:47.792 CC test/rpc_client/rpc_client_test.o 00:06:47.792 CC examples/nvme/abort/abort.o 00:06:47.792 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:47.792 LINK hello_bdev 00:06:47.792 CC test/nvme/aer/aer.o 00:06:47.792 CXX test/cpp_headers/endian.o 00:06:47.792 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:47.792 CXX test/cpp_headers/env_dpdk.o 00:06:48.050 LINK rpc_client_test 00:06:48.050 LINK pmr_persistence 00:06:48.050 LINK env_dpdk_post_init 00:06:48.050 CXX test/cpp_headers/env.o 00:06:48.050 CC test/env/memory/memory_ut.o 00:06:48.050 CC test/env/pci/pci_ut.o 00:06:48.050 
CC examples/bdev/bdevperf/bdevperf.o 00:06:48.050 LINK aer 00:06:48.353 CXX test/cpp_headers/event.o 00:06:48.353 LINK iscsi_fuzz 00:06:48.353 LINK abort 00:06:48.353 CC test/accel/dif/dif.o 00:06:48.353 CC test/blobfs/mkfs/mkfs.o 00:06:48.353 CC test/nvme/reset/reset.o 00:06:48.353 CXX test/cpp_headers/fd_group.o 00:06:48.612 CC test/lvol/esnap/esnap.o 00:06:48.612 CC test/app/jsoncat/jsoncat.o 00:06:48.612 CC test/nvme/sgl/sgl.o 00:06:48.612 LINK mkfs 00:06:48.612 CXX test/cpp_headers/fd.o 00:06:48.612 LINK pci_ut 00:06:48.612 LINK jsoncat 00:06:48.871 LINK reset 00:06:48.871 CXX test/cpp_headers/file.o 00:06:48.871 LINK sgl 00:06:48.871 CC test/nvme/e2edp/nvme_dp.o 00:06:48.871 CC test/app/stub/stub.o 00:06:49.129 CXX test/cpp_headers/fsdev.o 00:06:49.129 CC test/nvme/overhead/overhead.o 00:06:49.129 CC test/nvme/err_injection/err_injection.o 00:06:49.129 LINK bdevperf 00:06:49.129 CC test/nvme/startup/startup.o 00:06:49.129 LINK stub 00:06:49.129 CXX test/cpp_headers/fsdev_module.o 00:06:49.388 LINK nvme_dp 00:06:49.388 LINK dif 00:06:49.388 CXX test/cpp_headers/ftl.o 00:06:49.388 LINK overhead 00:06:49.388 LINK err_injection 00:06:49.388 LINK startup 00:06:49.388 CXX test/cpp_headers/fuse_dispatcher.o 00:06:49.388 LINK memory_ut 00:06:49.646 CXX test/cpp_headers/gpt_spec.o 00:06:49.646 CC test/nvme/simple_copy/simple_copy.o 00:06:49.646 CC test/nvme/reserve/reserve.o 00:06:49.646 CXX test/cpp_headers/hexlify.o 00:06:49.646 CC examples/nvmf/nvmf/nvmf.o 00:06:49.646 CC test/nvme/connect_stress/connect_stress.o 00:06:49.646 CC test/nvme/boot_partition/boot_partition.o 00:06:49.646 CC test/nvme/compliance/nvme_compliance.o 00:06:49.646 CXX test/cpp_headers/histogram_data.o 00:06:49.904 CXX test/cpp_headers/idxd.o 00:06:49.904 CC test/bdev/bdevio/bdevio.o 00:06:49.904 LINK reserve 00:06:49.904 LINK simple_copy 00:06:49.904 LINK boot_partition 00:06:49.904 LINK connect_stress 00:06:49.904 CXX test/cpp_headers/idxd_spec.o 00:06:49.904 CXX test/cpp_headers/init.o 
00:06:50.162 LINK nvmf 00:06:50.162 CC test/nvme/fused_ordering/fused_ordering.o 00:06:50.162 CXX test/cpp_headers/ioat.o 00:06:50.162 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:50.162 CC test/nvme/cuse/cuse.o 00:06:50.162 CC test/nvme/fdp/fdp.o 00:06:50.162 CXX test/cpp_headers/ioat_spec.o 00:06:50.162 LINK bdevio 00:06:50.162 LINK nvme_compliance 00:06:50.419 CXX test/cpp_headers/iscsi_spec.o 00:06:50.419 CXX test/cpp_headers/json.o 00:06:50.419 LINK fused_ordering 00:06:50.419 LINK doorbell_aers 00:06:50.420 CXX test/cpp_headers/jsonrpc.o 00:06:50.420 CXX test/cpp_headers/keyring.o 00:06:50.420 CXX test/cpp_headers/keyring_module.o 00:06:50.420 CXX test/cpp_headers/likely.o 00:06:50.420 CXX test/cpp_headers/log.o 00:06:50.678 CXX test/cpp_headers/lvol.o 00:06:50.678 CXX test/cpp_headers/md5.o 00:06:50.678 CXX test/cpp_headers/memory.o 00:06:50.678 LINK fdp 00:06:50.678 CXX test/cpp_headers/mmio.o 00:06:50.678 CXX test/cpp_headers/nbd.o 00:06:50.678 CXX test/cpp_headers/net.o 00:06:50.678 CXX test/cpp_headers/notify.o 00:06:50.678 CXX test/cpp_headers/nvme.o 00:06:50.678 CXX test/cpp_headers/nvme_intel.o 00:06:50.678 CXX test/cpp_headers/nvme_ocssd.o 00:06:50.937 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:50.937 CXX test/cpp_headers/nvme_spec.o 00:06:50.937 CXX test/cpp_headers/nvme_zns.o 00:06:50.937 CXX test/cpp_headers/nvmf_cmd.o 00:06:50.937 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:50.937 CXX test/cpp_headers/nvmf.o 00:06:50.937 CXX test/cpp_headers/nvmf_spec.o 00:06:50.937 CXX test/cpp_headers/nvmf_transport.o 00:06:50.937 CXX test/cpp_headers/opal.o 00:06:50.937 CXX test/cpp_headers/opal_spec.o 00:06:51.195 CXX test/cpp_headers/pci_ids.o 00:06:51.195 CXX test/cpp_headers/pipe.o 00:06:51.195 CXX test/cpp_headers/queue.o 00:06:51.195 CXX test/cpp_headers/reduce.o 00:06:51.195 CXX test/cpp_headers/rpc.o 00:06:51.195 CXX test/cpp_headers/scheduler.o 00:06:51.195 CXX test/cpp_headers/scsi.o 00:06:51.195 CXX test/cpp_headers/scsi_spec.o 00:06:51.195 CXX 
test/cpp_headers/sock.o 00:06:51.195 CXX test/cpp_headers/stdinc.o 00:06:51.195 CXX test/cpp_headers/string.o 00:06:51.454 CXX test/cpp_headers/thread.o 00:06:51.454 CXX test/cpp_headers/trace.o 00:06:51.454 CXX test/cpp_headers/trace_parser.o 00:06:51.454 CXX test/cpp_headers/tree.o 00:06:51.454 CXX test/cpp_headers/ublk.o 00:06:51.454 CXX test/cpp_headers/util.o 00:06:51.454 CXX test/cpp_headers/uuid.o 00:06:51.454 CXX test/cpp_headers/version.o 00:06:51.454 CXX test/cpp_headers/vfio_user_pci.o 00:06:51.454 CXX test/cpp_headers/vfio_user_spec.o 00:06:51.454 CXX test/cpp_headers/vhost.o 00:06:51.454 CXX test/cpp_headers/vmd.o 00:06:51.712 CXX test/cpp_headers/xor.o 00:06:51.712 CXX test/cpp_headers/zipf.o 00:06:51.970 LINK cuse 00:06:56.155 LINK esnap 00:06:56.414 ************************************ 00:06:56.414 END TEST make 00:06:56.414 ************************************ 00:06:56.414 00:06:56.414 real 1m36.475s 00:06:56.414 user 8m54.088s 00:06:56.414 sys 1m42.840s 00:06:56.414 07:03:53 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:56.414 07:03:53 make -- common/autotest_common.sh@10 -- $ set +x 00:06:56.414 07:03:53 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:56.414 07:03:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:56.414 07:03:53 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:56.414 07:03:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.414 07:03:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:56.414 07:03:53 -- pm/common@44 -- $ pid=5253 00:06:56.414 07:03:53 -- pm/common@50 -- $ kill -TERM 5253 00:06:56.414 07:03:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.414 07:03:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:56.414 07:03:53 -- pm/common@44 -- $ pid=5254 00:06:56.414 07:03:53 -- pm/common@50 -- $ kill -TERM 5254 00:06:56.414 
07:03:53 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:56.414 07:03:53 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:56.414 07:03:53 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.414 07:03:53 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.414 07:03:53 -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.673 07:03:53 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.673 07:03:53 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.673 07:03:53 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.673 07:03:53 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.673 07:03:53 -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.673 07:03:53 -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.673 07:03:53 -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.673 07:03:53 -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.673 07:03:53 -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.673 07:03:53 -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.673 07:03:53 -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.673 07:03:53 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.673 07:03:53 -- scripts/common.sh@344 -- # case "$op" in 00:06:56.673 07:03:53 -- scripts/common.sh@345 -- # : 1 00:06:56.673 07:03:53 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.673 07:03:53 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:56.673 07:03:53 -- scripts/common.sh@365 -- # decimal 1 00:06:56.673 07:03:53 -- scripts/common.sh@353 -- # local d=1 00:06:56.673 07:03:53 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.673 07:03:53 -- scripts/common.sh@355 -- # echo 1 00:06:56.673 07:03:53 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.673 07:03:53 -- scripts/common.sh@366 -- # decimal 2 00:06:56.673 07:03:53 -- scripts/common.sh@353 -- # local d=2 00:06:56.673 07:03:53 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.673 07:03:53 -- scripts/common.sh@355 -- # echo 2 00:06:56.673 07:03:53 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.673 07:03:53 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.673 07:03:53 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.673 07:03:53 -- scripts/common.sh@368 -- # return 0 00:06:56.673 07:03:53 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.673 07:03:53 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.673 --rc genhtml_branch_coverage=1 00:06:56.673 --rc genhtml_function_coverage=1 00:06:56.673 --rc genhtml_legend=1 00:06:56.673 --rc geninfo_all_blocks=1 00:06:56.673 --rc geninfo_unexecuted_blocks=1 00:06:56.673 00:06:56.673 ' 00:06:56.673 07:03:53 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.673 --rc genhtml_branch_coverage=1 00:06:56.673 --rc genhtml_function_coverage=1 00:06:56.673 --rc genhtml_legend=1 00:06:56.673 --rc geninfo_all_blocks=1 00:06:56.673 --rc geninfo_unexecuted_blocks=1 00:06:56.673 00:06:56.673 ' 00:06:56.673 07:03:53 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.673 --rc genhtml_branch_coverage=1 00:06:56.673 --rc 
genhtml_function_coverage=1 00:06:56.673 --rc genhtml_legend=1 00:06:56.673 --rc geninfo_all_blocks=1 00:06:56.673 --rc geninfo_unexecuted_blocks=1 00:06:56.673 00:06:56.673 ' 00:06:56.673 07:03:53 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.673 --rc genhtml_branch_coverage=1 00:06:56.673 --rc genhtml_function_coverage=1 00:06:56.673 --rc genhtml_legend=1 00:06:56.673 --rc geninfo_all_blocks=1 00:06:56.673 --rc geninfo_unexecuted_blocks=1 00:06:56.673 00:06:56.673 ' 00:06:56.673 07:03:53 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:56.673 07:03:53 -- nvmf/common.sh@7 -- # uname -s 00:06:56.673 07:03:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.673 07:03:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.673 07:03:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.673 07:03:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.673 07:03:53 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.673 07:03:53 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:56.673 07:03:53 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.673 07:03:53 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:56.673 07:03:53 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e7ce7ac-7390-4a29-9314-9b7b8205f111 00:06:56.673 07:03:53 -- nvmf/common.sh@16 -- # NVME_HOSTID=2e7ce7ac-7390-4a29-9314-9b7b8205f111 00:06:56.673 07:03:53 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.673 07:03:53 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:56.673 07:03:53 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:06:56.673 07:03:53 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:56.673 07:03:53 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:56.673 07:03:53 -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:56.673 07:03:53 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.673 07:03:53 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.673 07:03:53 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.673 07:03:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.673 07:03:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.674 07:03:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.674 07:03:53 -- paths/export.sh@5 -- # export PATH 00:06:56.674 07:03:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.674 07:03:53 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:06:56.674 07:03:53 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:56.674 07:03:53 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:56.674 07:03:53 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:56.674 07:03:53 -- nvmf/common.sh@50 
-- # : 0 00:06:56.674 07:03:53 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:56.674 07:03:53 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:56.674 07:03:53 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:56.674 07:03:53 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.674 07:03:53 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.674 07:03:53 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:56.674 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:56.674 07:03:53 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:56.674 07:03:53 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:56.674 07:03:53 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:56.674 07:03:53 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:56.674 07:03:53 -- spdk/autotest.sh@32 -- # uname -s 00:06:56.674 07:03:53 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:56.674 07:03:53 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:56.674 07:03:53 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:56.674 07:03:53 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:56.674 07:03:53 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:56.674 07:03:53 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:56.674 07:03:53 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:56.674 07:03:53 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:56.674 07:03:53 -- spdk/autotest.sh@48 -- # udevadm_pid=54306 00:06:56.674 07:03:53 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:56.674 07:03:53 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:56.674 07:03:53 -- pm/common@17 -- # local monitor 00:06:56.674 07:03:53 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.674 07:03:53 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:56.674 07:03:53 -- pm/common@25 -- # sleep 1 00:06:56.674 07:03:53 -- pm/common@21 -- # date +%s 00:06:56.674 07:03:53 -- pm/common@21 -- # date +%s 00:06:56.674 07:03:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086233 00:06:56.674 07:03:53 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732086233 00:06:56.674 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086233_collect-vmstat.pm.log 00:06:56.674 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732086233_collect-cpu-load.pm.log 00:06:57.634 07:03:54 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:57.634 07:03:54 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:57.634 07:03:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.634 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.634 07:03:54 -- spdk/autotest.sh@59 -- # create_test_list 00:06:57.634 07:03:54 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:57.634 07:03:54 -- common/autotest_common.sh@10 -- # set +x 00:06:57.634 07:03:54 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:57.634 07:03:54 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:57.634 07:03:54 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:57.634 07:03:54 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:57.634 07:03:54 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:57.634 07:03:54 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:57.634 07:03:54 -- common/autotest_common.sh@1457 -- # uname 00:06:57.634 07:03:54 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:57.634 07:03:54 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:57.634 07:03:54 -- common/autotest_common.sh@1477 -- # uname 00:06:57.634 07:03:54 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:57.634 07:03:54 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:57.634 07:03:54 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:57.893 lcov: LCOV version 1.15 00:06:57.893 07:03:54 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:16.070 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:16.070 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:34.153 07:04:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:34.153 07:04:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.153 07:04:29 -- common/autotest_common.sh@10 -- # set +x 00:07:34.153 07:04:29 -- spdk/autotest.sh@78 -- # rm -f 00:07:34.153 07:04:29 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:34.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:34.153 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:34.153 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:34.153 07:04:30 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:34.153 07:04:30 -- common/autotest_common.sh@1657 -- # 
zoned_devs=() 00:07:34.153 07:04:30 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:34.153 07:04:30 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:34.153 07:04:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.153 07:04:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:34.153 07:04:30 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:34.153 07:04:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.153 07:04:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:34.153 07:04:30 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:34.153 07:04:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.153 07:04:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:07:34.153 07:04:30 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:07:34.153 07:04:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:34.153 07:04:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:07:34.153 07:04:30 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:07:34.153 07:04:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:07:34.153 07:04:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:34.153 07:04:30 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:34.153 
07:04:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:34.153 07:04:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:34.153 07:04:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:34.153 07:04:30 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:34.153 07:04:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:34.153 No valid GPT data, bailing 00:07:34.153 07:04:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:34.153 07:04:30 -- scripts/common.sh@394 -- # pt= 00:07:34.153 07:04:30 -- scripts/common.sh@395 -- # return 1 00:07:34.153 07:04:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:34.153 1+0 records in 00:07:34.153 1+0 records out 00:07:34.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00515048 s, 204 MB/s 00:07:34.153 07:04:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:34.153 07:04:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:34.153 07:04:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:34.153 07:04:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:34.153 07:04:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:34.153 No valid GPT data, bailing 00:07:34.153 07:04:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:34.153 07:04:30 -- scripts/common.sh@394 -- # pt= 00:07:34.153 07:04:30 -- scripts/common.sh@395 -- # return 1 00:07:34.153 07:04:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:34.153 1+0 records in 00:07:34.153 1+0 records out 00:07:34.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461724 s, 227 MB/s 00:07:34.153 07:04:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:34.153 07:04:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:34.153 07:04:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:07:34.153 
07:04:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:07:34.153 07:04:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:07:34.153 No valid GPT data, bailing 00:07:34.154 07:04:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:07:34.154 07:04:30 -- scripts/common.sh@394 -- # pt= 00:07:34.154 07:04:30 -- scripts/common.sh@395 -- # return 1 00:07:34.154 07:04:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:07:34.154 1+0 records in 00:07:34.154 1+0 records out 00:07:34.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00457402 s, 229 MB/s 00:07:34.154 07:04:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:34.154 07:04:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:34.154 07:04:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:07:34.154 07:04:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:07:34.154 07:04:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:07:34.154 No valid GPT data, bailing 00:07:34.154 07:04:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:07:34.154 07:04:30 -- scripts/common.sh@394 -- # pt= 00:07:34.154 07:04:30 -- scripts/common.sh@395 -- # return 1 00:07:34.154 07:04:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:07:34.154 1+0 records in 00:07:34.154 1+0 records out 00:07:34.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440289 s, 238 MB/s 00:07:34.154 07:04:30 -- spdk/autotest.sh@105 -- # sync 00:07:34.154 07:04:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:34.154 07:04:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:34.154 07:04:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:35.528 07:04:32 -- spdk/autotest.sh@111 -- # uname -s 00:07:35.528 07:04:32 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:07:35.528 07:04:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:35.528 07:04:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:35.786 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:35.786 Hugepages 00:07:35.786 node hugesize free / total 00:07:35.786 node0 1048576kB 0 / 0 00:07:35.786 node0 2048kB 0 / 0 00:07:35.786 00:07:35.786 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:36.043 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:36.043 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:36.043 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:07:36.043 07:04:33 -- spdk/autotest.sh@117 -- # uname -s 00:07:36.043 07:04:33 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:36.043 07:04:33 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:36.043 07:04:33 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:36.609 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:36.867 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.867 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:36.867 07:04:34 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:37.820 07:04:35 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:37.820 07:04:35 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:37.820 07:04:35 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:37.820 07:04:35 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:37.820 07:04:35 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:37.820 07:04:35 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:37.820 07:04:35 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:37.820 07:04:35 -- common/autotest_common.sh@1499 
-- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:37.820 07:04:35 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:37.820 07:04:35 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:37.820 07:04:35 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:37.820 07:04:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.386 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.386 Waiting for block devices as requested 00:07:38.386 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.386 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.386 07:04:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:38.386 07:04:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:07:38.386 07:04:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:38.386 07:04:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:38.386 07:04:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:38.386 07:04:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:38.386 07:04:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:38.386 07:04:35 -- 
common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:38.386 07:04:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:38.386 07:04:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:38.386 07:04:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:38.386 07:04:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:38.386 07:04:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:38.386 07:04:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:38.386 07:04:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:38.386 07:04:35 -- common/autotest_common.sh@1543 -- # continue 00:07:38.386 07:04:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:38.386 07:04:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:38.645 07:04:35 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:07:38.645 07:04:35 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:38.645 07:04:35 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:38.645 07:04:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:38.645 07:04:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:38.645 07:04:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:38.645 07:04:35 -- 
common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:38.645 07:04:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:38.645 07:04:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:38.645 07:04:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:38.645 07:04:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:38.645 07:04:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:38.645 07:04:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:38.645 07:04:35 -- common/autotest_common.sh@1543 -- # continue 00:07:38.645 07:04:35 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:38.645 07:04:35 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.645 07:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:38.645 07:04:35 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:38.645 07:04:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.645 07:04:35 -- common/autotest_common.sh@10 -- # set +x 00:07:38.645 07:04:35 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.291 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.291 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.291 07:04:36 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:39.291 07:04:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:39.291 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:39.549 07:04:36 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:39.549 07:04:36 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:39.549 07:04:36 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:39.549 07:04:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:39.549 07:04:36 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:39.549 07:04:36 -- 
common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:39.549 07:04:36 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:39.549 07:04:36 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:39.549 07:04:36 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:39.549 07:04:36 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:39.549 07:04:36 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:39.549 07:04:36 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:39.549 07:04:36 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:39.549 07:04:36 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:07:39.549 07:04:36 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:39.550 07:04:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:39.550 07:04:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:39.550 07:04:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:39.550 07:04:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:39.550 07:04:36 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:39.550 07:04:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:39.550 07:04:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:39.550 07:04:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:39.550 07:04:36 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:39.550 07:04:36 -- common/autotest_common.sh@1572 -- # return 0 00:07:39.550 07:04:36 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:39.550 07:04:36 -- common/autotest_common.sh@1580 -- # return 0 00:07:39.550 07:04:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:39.550 07:04:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 
']' 00:07:39.550 07:04:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:39.550 07:04:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:39.550 07:04:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:39.550 07:04:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.550 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:39.550 07:04:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:39.550 07:04:36 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:39.550 07:04:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.550 07:04:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.550 07:04:36 -- common/autotest_common.sh@10 -- # set +x 00:07:39.550 ************************************ 00:07:39.550 START TEST env 00:07:39.550 ************************************ 00:07:39.550 07:04:36 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:39.550 * Looking for test storage... 
00:07:39.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:39.550 07:04:36 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.550 07:04:36 env -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.550 07:04:36 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.550 07:04:36 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.550 07:04:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.550 07:04:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.550 07:04:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.550 07:04:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.550 07:04:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.550 07:04:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.550 07:04:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.550 07:04:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.550 07:04:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.550 07:04:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.550 07:04:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.550 07:04:36 env -- scripts/common.sh@344 -- # case "$op" in 00:07:39.550 07:04:36 env -- scripts/common.sh@345 -- # : 1 00:07:39.550 07:04:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.550 07:04:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.808 07:04:36 env -- scripts/common.sh@365 -- # decimal 1 00:07:39.808 07:04:36 env -- scripts/common.sh@353 -- # local d=1 00:07:39.808 07:04:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.808 07:04:36 env -- scripts/common.sh@355 -- # echo 1 00:07:39.808 07:04:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.808 07:04:36 env -- scripts/common.sh@366 -- # decimal 2 00:07:39.808 07:04:36 env -- scripts/common.sh@353 -- # local d=2 00:07:39.808 07:04:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.808 07:04:36 env -- scripts/common.sh@355 -- # echo 2 00:07:39.808 07:04:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.808 07:04:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.808 07:04:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.808 07:04:36 env -- scripts/common.sh@368 -- # return 0 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.808 --rc genhtml_branch_coverage=1 00:07:39.808 --rc genhtml_function_coverage=1 00:07:39.808 --rc genhtml_legend=1 00:07:39.808 --rc geninfo_all_blocks=1 00:07:39.808 --rc geninfo_unexecuted_blocks=1 00:07:39.808 00:07:39.808 ' 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.808 --rc genhtml_branch_coverage=1 00:07:39.808 --rc genhtml_function_coverage=1 00:07:39.808 --rc genhtml_legend=1 00:07:39.808 --rc geninfo_all_blocks=1 00:07:39.808 --rc geninfo_unexecuted_blocks=1 00:07:39.808 00:07:39.808 ' 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:39.808 --rc genhtml_branch_coverage=1 00:07:39.808 --rc genhtml_function_coverage=1 00:07:39.808 --rc genhtml_legend=1 00:07:39.808 --rc geninfo_all_blocks=1 00:07:39.808 --rc geninfo_unexecuted_blocks=1 00:07:39.808 00:07:39.808 ' 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.808 --rc genhtml_branch_coverage=1 00:07:39.808 --rc genhtml_function_coverage=1 00:07:39.808 --rc genhtml_legend=1 00:07:39.808 --rc geninfo_all_blocks=1 00:07:39.808 --rc geninfo_unexecuted_blocks=1 00:07:39.808 00:07:39.808 ' 00:07:39.808 07:04:36 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.808 07:04:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.808 07:04:36 env -- common/autotest_common.sh@10 -- # set +x 00:07:39.808 ************************************ 00:07:39.808 START TEST env_memory 00:07:39.808 ************************************ 00:07:39.808 07:04:36 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:39.808 00:07:39.808 00:07:39.808 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.808 http://cunit.sourceforge.net/ 00:07:39.808 00:07:39.808 00:07:39.808 Suite: memory 00:07:39.808 Test: alloc and free memory map ...[2024-11-20 07:04:36.965680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:39.808 passed 00:07:39.808 Test: mem map translation ...[2024-11-20 07:04:37.044748] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:39.808 [2024-11-20 07:04:37.044922] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:39.808 [2024-11-20 07:04:37.045053] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:39.808 [2024-11-20 07:04:37.045115] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:39.808 passed 00:07:40.066 Test: mem map registration ...[2024-11-20 07:04:37.129125] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:40.066 [2024-11-20 07:04:37.129238] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:40.066 passed 00:07:40.066 Test: mem map adjacent registrations ...passed 00:07:40.066 00:07:40.066 Run Summary: Type Total Ran Passed Failed Inactive 00:07:40.066 suites 1 1 n/a 0 0 00:07:40.066 tests 4 4 4 0 0 00:07:40.066 asserts 152 152 152 0 n/a 00:07:40.066 00:07:40.066 Elapsed time = 0.325 seconds 00:07:40.066 00:07:40.066 real 0m0.362s 00:07:40.066 user 0m0.330s 00:07:40.066 sys 0m0.026s 00:07:40.066 07:04:37 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.066 07:04:37 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 ************************************ 00:07:40.066 END TEST env_memory 00:07:40.066 ************************************ 00:07:40.066 07:04:37 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:40.066 07:04:37 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.066 07:04:37 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.066 07:04:37 env -- common/autotest_common.sh@10 -- # set +x 00:07:40.066 
************************************ 00:07:40.066 START TEST env_vtophys 00:07:40.066 ************************************ 00:07:40.066 07:04:37 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:40.066 EAL: lib.eal log level changed from notice to debug 00:07:40.066 EAL: Detected lcore 0 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 1 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 2 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 3 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 4 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 5 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 6 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 7 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 8 as core 0 on socket 0 00:07:40.066 EAL: Detected lcore 9 as core 0 on socket 0 00:07:40.066 EAL: Maximum logical cores by configuration: 128 00:07:40.066 EAL: Detected CPU lcores: 10 00:07:40.066 EAL: Detected NUMA nodes: 1 00:07:40.066 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:40.066 EAL: Detected shared linkage of DPDK 00:07:40.066 EAL: No shared files mode enabled, IPC will be disabled 00:07:40.066 EAL: Selected IOVA mode 'PA' 00:07:40.066 EAL: Probing VFIO support... 00:07:40.066 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:40.066 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:40.066 EAL: Ask a virtual area of 0x2e000 bytes 00:07:40.066 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:40.066 EAL: Setting up physically contiguous memory... 
00:07:40.066 EAL: Setting maximum number of open files to 524288 00:07:40.066 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:40.066 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:40.066 EAL: Ask a virtual area of 0x61000 bytes 00:07:40.066 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:40.066 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:40.066 EAL: Ask a virtual area of 0x400000000 bytes 00:07:40.066 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:40.066 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:40.066 EAL: Ask a virtual area of 0x61000 bytes 00:07:40.066 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:40.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:40.324 EAL: Ask a virtual area of 0x400000000 bytes 00:07:40.324 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:40.324 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:40.324 EAL: Ask a virtual area of 0x61000 bytes 00:07:40.324 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:40.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:40.324 EAL: Ask a virtual area of 0x400000000 bytes 00:07:40.324 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:40.324 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:40.324 EAL: Ask a virtual area of 0x61000 bytes 00:07:40.324 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:40.324 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:40.324 EAL: Ask a virtual area of 0x400000000 bytes 00:07:40.324 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:40.324 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:40.324 EAL: Hugepages will be freed exactly as allocated. 
00:07:40.324 EAL: No shared files mode enabled, IPC is disabled 00:07:40.324 EAL: No shared files mode enabled, IPC is disabled 00:07:40.324 EAL: TSC frequency is ~2200000 KHz 00:07:40.324 EAL: Main lcore 0 is ready (tid=7f8429d25a40;cpuset=[0]) 00:07:40.324 EAL: Trying to obtain current memory policy. 00:07:40.324 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.324 EAL: Restoring previous memory policy: 0 00:07:40.324 EAL: request: mp_malloc_sync 00:07:40.324 EAL: No shared files mode enabled, IPC is disabled 00:07:40.324 EAL: Heap on socket 0 was expanded by 2MB 00:07:40.324 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:40.324 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:40.324 EAL: Mem event callback 'spdk:(nil)' registered 00:07:40.324 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:40.324 00:07:40.324 00:07:40.324 CUnit - A unit testing framework for C - Version 2.1-3 00:07:40.324 http://cunit.sourceforge.net/ 00:07:40.324 00:07:40.324 00:07:40.324 Suite: components_suite 00:07:40.891 Test: vtophys_malloc_test ...passed 00:07:40.891 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:40.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.891 EAL: Restoring previous memory policy: 4 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was expanded by 4MB 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was shrunk by 4MB 00:07:40.891 EAL: Trying to obtain current memory policy. 
00:07:40.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.891 EAL: Restoring previous memory policy: 4 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was expanded by 6MB 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was shrunk by 6MB 00:07:40.891 EAL: Trying to obtain current memory policy. 00:07:40.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.891 EAL: Restoring previous memory policy: 4 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was expanded by 10MB 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was shrunk by 10MB 00:07:40.891 EAL: Trying to obtain current memory policy. 00:07:40.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.891 EAL: Restoring previous memory policy: 4 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was expanded by 18MB 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was shrunk by 18MB 00:07:40.891 EAL: Trying to obtain current memory policy. 
00:07:40.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.891 EAL: Restoring previous memory policy: 4 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was expanded by 34MB 00:07:40.891 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.891 EAL: request: mp_malloc_sync 00:07:40.891 EAL: No shared files mode enabled, IPC is disabled 00:07:40.891 EAL: Heap on socket 0 was shrunk by 34MB 00:07:40.891 EAL: Trying to obtain current memory policy. 00:07:40.891 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.150 EAL: Restoring previous memory policy: 4 00:07:41.150 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.150 EAL: request: mp_malloc_sync 00:07:41.150 EAL: No shared files mode enabled, IPC is disabled 00:07:41.150 EAL: Heap on socket 0 was expanded by 66MB 00:07:41.150 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.150 EAL: request: mp_malloc_sync 00:07:41.150 EAL: No shared files mode enabled, IPC is disabled 00:07:41.150 EAL: Heap on socket 0 was shrunk by 66MB 00:07:41.150 EAL: Trying to obtain current memory policy. 00:07:41.150 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.150 EAL: Restoring previous memory policy: 4 00:07:41.150 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.150 EAL: request: mp_malloc_sync 00:07:41.150 EAL: No shared files mode enabled, IPC is disabled 00:07:41.150 EAL: Heap on socket 0 was expanded by 130MB 00:07:41.412 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.412 EAL: request: mp_malloc_sync 00:07:41.412 EAL: No shared files mode enabled, IPC is disabled 00:07:41.412 EAL: Heap on socket 0 was shrunk by 130MB 00:07:41.670 EAL: Trying to obtain current memory policy. 
00:07:41.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.670 EAL: Restoring previous memory policy: 4 00:07:41.670 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.670 EAL: request: mp_malloc_sync 00:07:41.670 EAL: No shared files mode enabled, IPC is disabled 00:07:41.670 EAL: Heap on socket 0 was expanded by 258MB 00:07:42.236 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.236 EAL: request: mp_malloc_sync 00:07:42.236 EAL: No shared files mode enabled, IPC is disabled 00:07:42.236 EAL: Heap on socket 0 was shrunk by 258MB 00:07:42.493 EAL: Trying to obtain current memory policy. 00:07:42.493 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.752 EAL: Restoring previous memory policy: 4 00:07:42.752 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.752 EAL: request: mp_malloc_sync 00:07:42.752 EAL: No shared files mode enabled, IPC is disabled 00:07:42.752 EAL: Heap on socket 0 was expanded by 514MB 00:07:43.686 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.686 EAL: request: mp_malloc_sync 00:07:43.686 EAL: No shared files mode enabled, IPC is disabled 00:07:43.686 EAL: Heap on socket 0 was shrunk by 514MB 00:07:44.251 EAL: Trying to obtain current memory policy. 
00:07:44.251 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:44.509 EAL: Restoring previous memory policy: 4 00:07:44.509 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.509 EAL: request: mp_malloc_sync 00:07:44.509 EAL: No shared files mode enabled, IPC is disabled 00:07:44.509 EAL: Heap on socket 0 was expanded by 1026MB 00:07:46.407 EAL: Calling mem event callback 'spdk:(nil)' 00:07:46.407 EAL: request: mp_malloc_sync 00:07:46.407 EAL: No shared files mode enabled, IPC is disabled 00:07:46.407 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:48.306 passed 00:07:48.306 00:07:48.306 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.307 suites 1 1 n/a 0 0 00:07:48.307 tests 2 2 2 0 0 00:07:48.307 asserts 5775 5775 5775 0 n/a 00:07:48.307 00:07:48.307 Elapsed time = 7.526 seconds 00:07:48.307 EAL: Calling mem event callback 'spdk:(nil)' 00:07:48.307 EAL: request: mp_malloc_sync 00:07:48.307 EAL: No shared files mode enabled, IPC is disabled 00:07:48.307 EAL: Heap on socket 0 was shrunk by 2MB 00:07:48.307 EAL: No shared files mode enabled, IPC is disabled 00:07:48.307 EAL: No shared files mode enabled, IPC is disabled 00:07:48.307 EAL: No shared files mode enabled, IPC is disabled 00:07:48.307 00:07:48.307 real 0m7.867s 00:07:48.307 user 0m6.689s 00:07:48.307 sys 0m1.001s 00:07:48.307 07:04:45 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.307 07:04:45 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 ************************************ 00:07:48.307 END TEST env_vtophys 00:07:48.307 ************************************ 00:07:48.307 07:04:45 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:48.307 07:04:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.307 07:04:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.307 07:04:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 
************************************ 00:07:48.307 START TEST env_pci 00:07:48.307 ************************************ 00:07:48.307 07:04:45 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:48.307 00:07:48.307 00:07:48.307 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.307 http://cunit.sourceforge.net/ 00:07:48.307 00:07:48.307 00:07:48.307 Suite: pci 00:07:48.307 Test: pci_hook ...[2024-11-20 07:04:45.234742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56621 has claimed it 00:07:48.307 EAL: Cannot find device (10000:00:01.0) 00:07:48.307 EAL: Failed to attach device on primary process 00:07:48.307 passed 00:07:48.307 00:07:48.307 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.307 suites 1 1 n/a 0 0 00:07:48.307 tests 1 1 1 0 0 00:07:48.307 asserts 25 25 25 0 n/a 00:07:48.307 00:07:48.307 Elapsed time = 0.007 seconds 00:07:48.307 00:07:48.307 real 0m0.078s 00:07:48.307 user 0m0.040s 00:07:48.307 sys 0m0.037s 00:07:48.307 07:04:45 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.307 ************************************ 00:07:48.307 END TEST env_pci 00:07:48.307 ************************************ 00:07:48.307 07:04:45 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 07:04:45 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:48.307 07:04:45 env -- env/env.sh@15 -- # uname 00:07:48.307 07:04:45 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:48.307 07:04:45 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:48.307 07:04:45 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:48.307 07:04:45 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.307 07:04:45 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.307 07:04:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 ************************************ 00:07:48.307 START TEST env_dpdk_post_init 00:07:48.307 ************************************ 00:07:48.307 07:04:45 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:48.307 EAL: Detected CPU lcores: 10 00:07:48.307 EAL: Detected NUMA nodes: 1 00:07:48.307 EAL: Detected shared linkage of DPDK 00:07:48.307 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:48.307 EAL: Selected IOVA mode 'PA' 00:07:48.307 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:48.307 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:48.307 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:48.307 Starting DPDK initialization... 00:07:48.307 Starting SPDK post initialization... 00:07:48.307 SPDK NVMe probe 00:07:48.307 Attaching to 0000:00:10.0 00:07:48.307 Attaching to 0000:00:11.0 00:07:48.307 Attached to 0000:00:10.0 00:07:48.307 Attached to 0000:00:11.0 00:07:48.307 Cleaning up... 
00:07:48.307 00:07:48.307 real 0m0.272s 00:07:48.307 user 0m0.090s 00:07:48.307 sys 0m0.080s 00:07:48.307 ************************************ 00:07:48.307 END TEST env_dpdk_post_init 00:07:48.307 ************************************ 00:07:48.307 07:04:45 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.307 07:04:45 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:48.307 07:04:45 env -- env/env.sh@26 -- # uname 00:07:48.307 07:04:45 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:48.307 07:04:45 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:48.307 07:04:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.307 07:04:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.307 07:04:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:48.566 ************************************ 00:07:48.566 START TEST env_mem_callbacks 00:07:48.566 ************************************ 00:07:48.566 07:04:45 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:48.566 EAL: Detected CPU lcores: 10 00:07:48.566 EAL: Detected NUMA nodes: 1 00:07:48.566 EAL: Detected shared linkage of DPDK 00:07:48.566 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:48.566 EAL: Selected IOVA mode 'PA' 00:07:48.566 00:07:48.566 00:07:48.566 CUnit - A unit testing framework for C - Version 2.1-3 00:07:48.566 http://cunit.sourceforge.net/ 00:07:48.566 00:07:48.566 00:07:48.566 Suite: memory 00:07:48.566 Test: test ... 
00:07:48.566 register 0x200000200000 2097152 00:07:48.566 malloc 3145728 00:07:48.566 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:48.566 register 0x200000400000 4194304 00:07:48.566 buf 0x2000004fffc0 len 3145728 PASSED 00:07:48.566 malloc 64 00:07:48.566 buf 0x2000004ffec0 len 64 PASSED 00:07:48.566 malloc 4194304 00:07:48.566 register 0x200000800000 6291456 00:07:48.566 buf 0x2000009fffc0 len 4194304 PASSED 00:07:48.566 free 0x2000004fffc0 3145728 00:07:48.566 free 0x2000004ffec0 64 00:07:48.566 unregister 0x200000400000 4194304 PASSED 00:07:48.566 free 0x2000009fffc0 4194304 00:07:48.566 unregister 0x200000800000 6291456 PASSED 00:07:48.566 malloc 8388608 00:07:48.566 register 0x200000400000 10485760 00:07:48.566 buf 0x2000005fffc0 len 8388608 PASSED 00:07:48.566 free 0x2000005fffc0 8388608 00:07:48.566 unregister 0x200000400000 10485760 PASSED 00:07:48.566 passed 00:07:48.566 00:07:48.566 Run Summary: Type Total Ran Passed Failed Inactive 00:07:48.566 suites 1 1 n/a 0 0 00:07:48.566 tests 1 1 1 0 0 00:07:48.566 asserts 15 15 15 0 n/a 00:07:48.566 00:07:48.566 Elapsed time = 0.077 seconds 00:07:48.824 00:07:48.824 real 0m0.268s 00:07:48.824 user 0m0.098s 00:07:48.824 sys 0m0.066s 00:07:48.824 07:04:45 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.824 07:04:45 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:48.824 ************************************ 00:07:48.824 END TEST env_mem_callbacks 00:07:48.824 ************************************ 00:07:48.824 ************************************ 00:07:48.824 END TEST env 00:07:48.824 ************************************ 00:07:48.824 00:07:48.824 real 0m9.222s 00:07:48.824 user 0m7.422s 00:07:48.824 sys 0m1.409s 00:07:48.824 07:04:45 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.824 07:04:45 env -- common/autotest_common.sh@10 -- # set +x 00:07:48.824 07:04:45 -- spdk/autotest.sh@156 -- # run_test rpc 
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:48.824 07:04:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.824 07:04:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.824 07:04:45 -- common/autotest_common.sh@10 -- # set +x 00:07:48.824 ************************************ 00:07:48.824 START TEST rpc 00:07:48.824 ************************************ 00:07:48.824 07:04:45 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:48.824 * Looking for test storage... 00:07:48.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:48.824 07:04:46 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.824 07:04:46 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.824 07:04:46 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.082 07:04:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.082 07:04:46 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.082 07:04:46 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.082 07:04:46 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.082 07:04:46 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.082 07:04:46 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:49.082 07:04:46 rpc -- scripts/common.sh@345 -- # : 1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.082 07:04:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.082 07:04:46 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@353 -- # local d=1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.082 07:04:46 rpc -- scripts/common.sh@355 -- # echo 1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.082 07:04:46 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@353 -- # local d=2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.082 07:04:46 rpc -- scripts/common.sh@355 -- # echo 2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.082 07:04:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.082 07:04:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.082 07:04:46 rpc -- scripts/common.sh@368 -- # return 0 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.082 --rc genhtml_branch_coverage=1 00:07:49.082 --rc genhtml_function_coverage=1 00:07:49.082 --rc genhtml_legend=1 00:07:49.082 --rc geninfo_all_blocks=1 00:07:49.082 --rc geninfo_unexecuted_blocks=1 00:07:49.082 00:07:49.082 ' 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.082 --rc genhtml_branch_coverage=1 00:07:49.082 --rc genhtml_function_coverage=1 00:07:49.082 --rc genhtml_legend=1 00:07:49.082 --rc geninfo_all_blocks=1 00:07:49.082 --rc geninfo_unexecuted_blocks=1 00:07:49.082 00:07:49.082 ' 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:49.082 --rc genhtml_branch_coverage=1 00:07:49.082 --rc genhtml_function_coverage=1 00:07:49.082 --rc genhtml_legend=1 00:07:49.082 --rc geninfo_all_blocks=1 00:07:49.082 --rc geninfo_unexecuted_blocks=1 00:07:49.082 00:07:49.082 ' 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.082 --rc genhtml_branch_coverage=1 00:07:49.082 --rc genhtml_function_coverage=1 00:07:49.082 --rc genhtml_legend=1 00:07:49.082 --rc geninfo_all_blocks=1 00:07:49.082 --rc geninfo_unexecuted_blocks=1 00:07:49.082 00:07:49.082 ' 00:07:49.082 07:04:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56748 00:07:49.082 07:04:46 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:49.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.082 07:04:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:49.082 07:04:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56748 00:07:49.082 07:04:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 56748 ']' 00:07:49.083 07:04:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.083 07:04:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.083 07:04:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.083 07:04:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.083 07:04:46 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.083 [2024-11-20 07:04:46.284668] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:07:49.083 [2024-11-20 07:04:46.284840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56748 ] 00:07:49.341 [2024-11-20 07:04:46.460855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.341 [2024-11-20 07:04:46.654186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:49.341 [2024-11-20 07:04:46.654280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56748' to capture a snapshot of events at runtime. 00:07:49.341 [2024-11-20 07:04:46.654298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.341 [2024-11-20 07:04:46.654315] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.341 [2024-11-20 07:04:46.654326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56748 for offline analysis/debug. 
00:07:49.341 [2024-11-20 07:04:46.655674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.276 07:04:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.276 07:04:47 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.276 07:04:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:50.276 07:04:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:50.276 07:04:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:50.276 07:04:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:50.276 07:04:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.276 07:04:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.276 07:04:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.276 ************************************ 00:07:50.276 START TEST rpc_integrity 00:07:50.276 ************************************ 00:07:50.276 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:50.276 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:50.276 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.276 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.276 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.276 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:50.276 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:50.535 07:04:47 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:50.535 { 00:07:50.535 "name": "Malloc0", 00:07:50.535 "aliases": [ 00:07:50.535 "e3326023-1282-4b51-9faf-e9a489b4a15b" 00:07:50.535 ], 00:07:50.535 "product_name": "Malloc disk", 00:07:50.535 "block_size": 512, 00:07:50.535 "num_blocks": 16384, 00:07:50.535 "uuid": "e3326023-1282-4b51-9faf-e9a489b4a15b", 00:07:50.535 "assigned_rate_limits": { 00:07:50.535 "rw_ios_per_sec": 0, 00:07:50.535 "rw_mbytes_per_sec": 0, 00:07:50.535 "r_mbytes_per_sec": 0, 00:07:50.535 "w_mbytes_per_sec": 0 00:07:50.535 }, 00:07:50.535 "claimed": false, 00:07:50.535 "zoned": false, 00:07:50.535 "supported_io_types": { 00:07:50.535 "read": true, 00:07:50.535 "write": true, 00:07:50.535 "unmap": true, 00:07:50.535 "flush": true, 00:07:50.535 "reset": true, 00:07:50.535 "nvme_admin": false, 00:07:50.535 "nvme_io": false, 00:07:50.535 "nvme_io_md": false, 00:07:50.535 "write_zeroes": true, 00:07:50.535 "zcopy": true, 00:07:50.535 "get_zone_info": false, 00:07:50.535 "zone_management": false, 00:07:50.535 "zone_append": false, 00:07:50.535 "compare": false, 00:07:50.535 "compare_and_write": false, 00:07:50.535 "abort": true, 00:07:50.535 "seek_hole": false, 
00:07:50.535 "seek_data": false, 00:07:50.535 "copy": true, 00:07:50.535 "nvme_iov_md": false 00:07:50.535 }, 00:07:50.535 "memory_domains": [ 00:07:50.535 { 00:07:50.535 "dma_device_id": "system", 00:07:50.535 "dma_device_type": 1 00:07:50.535 }, 00:07:50.535 { 00:07:50.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.535 "dma_device_type": 2 00:07:50.535 } 00:07:50.535 ], 00:07:50.535 "driver_specific": {} 00:07:50.535 } 00:07:50.535 ]' 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.535 [2024-11-20 07:04:47.699861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:50.535 [2024-11-20 07:04:47.699970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.535 [2024-11-20 07:04:47.700007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:50.535 [2024-11-20 07:04:47.700031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.535 [2024-11-20 07:04:47.703097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.535 [2024-11-20 07:04:47.703156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:50.535 Passthru0 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:07:50.535 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.535 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:50.535 { 00:07:50.535 "name": "Malloc0", 00:07:50.535 "aliases": [ 00:07:50.535 "e3326023-1282-4b51-9faf-e9a489b4a15b" 00:07:50.535 ], 00:07:50.535 "product_name": "Malloc disk", 00:07:50.535 "block_size": 512, 00:07:50.535 "num_blocks": 16384, 00:07:50.535 "uuid": "e3326023-1282-4b51-9faf-e9a489b4a15b", 00:07:50.535 "assigned_rate_limits": { 00:07:50.535 "rw_ios_per_sec": 0, 00:07:50.535 "rw_mbytes_per_sec": 0, 00:07:50.535 "r_mbytes_per_sec": 0, 00:07:50.535 "w_mbytes_per_sec": 0 00:07:50.535 }, 00:07:50.535 "claimed": true, 00:07:50.535 "claim_type": "exclusive_write", 00:07:50.535 "zoned": false, 00:07:50.535 "supported_io_types": { 00:07:50.535 "read": true, 00:07:50.535 "write": true, 00:07:50.535 "unmap": true, 00:07:50.535 "flush": true, 00:07:50.535 "reset": true, 00:07:50.535 "nvme_admin": false, 00:07:50.535 "nvme_io": false, 00:07:50.535 "nvme_io_md": false, 00:07:50.535 "write_zeroes": true, 00:07:50.535 "zcopy": true, 00:07:50.535 "get_zone_info": false, 00:07:50.535 "zone_management": false, 00:07:50.535 "zone_append": false, 00:07:50.535 "compare": false, 00:07:50.535 "compare_and_write": false, 00:07:50.535 "abort": true, 00:07:50.535 "seek_hole": false, 00:07:50.536 "seek_data": false, 00:07:50.536 "copy": true, 00:07:50.536 "nvme_iov_md": false 00:07:50.536 }, 00:07:50.536 "memory_domains": [ 00:07:50.536 { 00:07:50.536 "dma_device_id": "system", 00:07:50.536 "dma_device_type": 1 00:07:50.536 }, 00:07:50.536 { 00:07:50.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.536 "dma_device_type": 2 00:07:50.536 } 00:07:50.536 ], 00:07:50.536 "driver_specific": {} 00:07:50.536 }, 00:07:50.536 { 00:07:50.536 "name": "Passthru0", 00:07:50.536 "aliases": [ 00:07:50.536 "b75d61e6-625b-53b5-939a-64f532e7d8bb" 00:07:50.536 ], 00:07:50.536 "product_name": "passthru", 00:07:50.536 
"block_size": 512, 00:07:50.536 "num_blocks": 16384, 00:07:50.536 "uuid": "b75d61e6-625b-53b5-939a-64f532e7d8bb", 00:07:50.536 "assigned_rate_limits": { 00:07:50.536 "rw_ios_per_sec": 0, 00:07:50.536 "rw_mbytes_per_sec": 0, 00:07:50.536 "r_mbytes_per_sec": 0, 00:07:50.536 "w_mbytes_per_sec": 0 00:07:50.536 }, 00:07:50.536 "claimed": false, 00:07:50.536 "zoned": false, 00:07:50.536 "supported_io_types": { 00:07:50.536 "read": true, 00:07:50.536 "write": true, 00:07:50.536 "unmap": true, 00:07:50.536 "flush": true, 00:07:50.536 "reset": true, 00:07:50.536 "nvme_admin": false, 00:07:50.536 "nvme_io": false, 00:07:50.536 "nvme_io_md": false, 00:07:50.536 "write_zeroes": true, 00:07:50.536 "zcopy": true, 00:07:50.536 "get_zone_info": false, 00:07:50.536 "zone_management": false, 00:07:50.536 "zone_append": false, 00:07:50.536 "compare": false, 00:07:50.536 "compare_and_write": false, 00:07:50.536 "abort": true, 00:07:50.536 "seek_hole": false, 00:07:50.536 "seek_data": false, 00:07:50.536 "copy": true, 00:07:50.536 "nvme_iov_md": false 00:07:50.536 }, 00:07:50.536 "memory_domains": [ 00:07:50.536 { 00:07:50.536 "dma_device_id": "system", 00:07:50.536 "dma_device_type": 1 00:07:50.536 }, 00:07:50.536 { 00:07:50.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.536 "dma_device_type": 2 00:07:50.536 } 00:07:50.536 ], 00:07:50.536 "driver_specific": { 00:07:50.536 "passthru": { 00:07:50.536 "name": "Passthru0", 00:07:50.536 "base_bdev_name": "Malloc0" 00:07:50.536 } 00:07:50.536 } 00:07:50.536 } 00:07:50.536 ]' 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.536 07:04:47 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.536 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:50.536 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:50.794 ************************************ 00:07:50.794 END TEST rpc_integrity 00:07:50.794 ************************************ 00:07:50.794 07:04:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:50.794 00:07:50.794 real 0m0.344s 00:07:50.794 user 0m0.212s 00:07:50.794 sys 0m0.037s 00:07:50.794 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.794 07:04:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.794 07:04:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:50.794 07:04:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.794 07:04:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.794 07:04:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.794 ************************************ 00:07:50.794 START TEST rpc_plugins 00:07:50.794 ************************************ 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:50.795 07:04:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.795 07:04:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:50.795 07:04:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 07:04:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.795 07:04:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:50.795 { 00:07:50.795 "name": "Malloc1", 00:07:50.795 "aliases": [ 00:07:50.795 "3bc37f04-e7ba-4527-96e2-c917ecc689c7" 00:07:50.795 ], 00:07:50.795 "product_name": "Malloc disk", 00:07:50.795 "block_size": 4096, 00:07:50.795 "num_blocks": 256, 00:07:50.795 "uuid": "3bc37f04-e7ba-4527-96e2-c917ecc689c7", 00:07:50.795 "assigned_rate_limits": { 00:07:50.795 "rw_ios_per_sec": 0, 00:07:50.795 "rw_mbytes_per_sec": 0, 00:07:50.795 "r_mbytes_per_sec": 0, 00:07:50.795 "w_mbytes_per_sec": 0 00:07:50.795 }, 00:07:50.795 "claimed": false, 00:07:50.795 "zoned": false, 00:07:50.795 "supported_io_types": { 00:07:50.795 "read": true, 00:07:50.795 "write": true, 00:07:50.795 "unmap": true, 00:07:50.795 "flush": true, 00:07:50.795 "reset": true, 00:07:50.795 "nvme_admin": false, 00:07:50.795 "nvme_io": false, 00:07:50.795 "nvme_io_md": false, 00:07:50.795 "write_zeroes": true, 00:07:50.795 "zcopy": true, 00:07:50.795 "get_zone_info": false, 00:07:50.795 "zone_management": false, 00:07:50.795 "zone_append": false, 00:07:50.795 "compare": false, 00:07:50.795 "compare_and_write": false, 00:07:50.795 "abort": true, 00:07:50.795 "seek_hole": false, 00:07:50.795 "seek_data": false, 00:07:50.795 "copy": 
true, 00:07:50.795 "nvme_iov_md": false 00:07:50.795 }, 00:07:50.795 "memory_domains": [ 00:07:50.795 { 00:07:50.795 "dma_device_id": "system", 00:07:50.795 "dma_device_type": 1 00:07:50.795 }, 00:07:50.795 { 00:07:50.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.795 "dma_device_type": 2 00:07:50.795 } 00:07:50.795 ], 00:07:50.795 "driver_specific": {} 00:07:50.795 } 00:07:50.795 ]' 00:07:50.795 07:04:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:50.795 07:04:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:50.795 07:04:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.795 07:04:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.795 07:04:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:50.795 07:04:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:50.795 ************************************ 00:07:50.795 END TEST rpc_plugins 00:07:50.795 ************************************ 00:07:50.795 07:04:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:50.795 00:07:50.795 real 0m0.147s 00:07:50.795 user 0m0.087s 00:07:50.795 sys 0m0.020s 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.795 07:04:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 07:04:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:50.795 07:04:48 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.795 07:04:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.795 07:04:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.795 ************************************ 00:07:50.795 START TEST rpc_trace_cmd_test 00:07:50.795 ************************************ 00:07:50.795 07:04:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:51.125 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56748", 00:07:51.125 "tpoint_group_mask": "0x8", 00:07:51.125 "iscsi_conn": { 00:07:51.125 "mask": "0x2", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "scsi": { 00:07:51.125 "mask": "0x4", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "bdev": { 00:07:51.125 "mask": "0x8", 00:07:51.125 "tpoint_mask": "0xffffffffffffffff" 00:07:51.125 }, 00:07:51.125 "nvmf_rdma": { 00:07:51.125 "mask": "0x10", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "nvmf_tcp": { 00:07:51.125 "mask": "0x20", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "ftl": { 00:07:51.125 "mask": "0x40", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "blobfs": { 00:07:51.125 "mask": "0x80", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "dsa": { 00:07:51.125 "mask": "0x200", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "thread": { 00:07:51.125 "mask": "0x400", 00:07:51.125 
"tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "nvme_pcie": { 00:07:51.125 "mask": "0x800", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "iaa": { 00:07:51.125 "mask": "0x1000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "nvme_tcp": { 00:07:51.125 "mask": "0x2000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "bdev_nvme": { 00:07:51.125 "mask": "0x4000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "sock": { 00:07:51.125 "mask": "0x8000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "blob": { 00:07:51.125 "mask": "0x10000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "bdev_raid": { 00:07:51.125 "mask": "0x20000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 }, 00:07:51.125 "scheduler": { 00:07:51.125 "mask": "0x40000", 00:07:51.125 "tpoint_mask": "0x0" 00:07:51.125 } 00:07:51.125 }' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:51.125 ************************************ 00:07:51.125 END TEST rpc_trace_cmd_test 00:07:51.125 ************************************ 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:51.125 00:07:51.125 real 0m0.298s 00:07:51.125 user 
0m0.261s 00:07:51.125 sys 0m0.026s 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.125 07:04:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.387 07:04:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:51.387 07:04:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:51.387 07:04:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:51.387 07:04:48 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.387 07:04:48 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.387 07:04:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.387 ************************************ 00:07:51.387 START TEST rpc_daemon_integrity 00:07:51.387 ************************************ 00:07:51.387 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:51.387 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:51.387 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.387 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:51.388 { 00:07:51.388 "name": "Malloc2", 00:07:51.388 "aliases": [ 00:07:51.388 "504aa4c9-984e-4641-8669-bcfff28c67a3" 00:07:51.388 ], 00:07:51.388 "product_name": "Malloc disk", 00:07:51.388 "block_size": 512, 00:07:51.388 "num_blocks": 16384, 00:07:51.388 "uuid": "504aa4c9-984e-4641-8669-bcfff28c67a3", 00:07:51.388 "assigned_rate_limits": { 00:07:51.388 "rw_ios_per_sec": 0, 00:07:51.388 "rw_mbytes_per_sec": 0, 00:07:51.388 "r_mbytes_per_sec": 0, 00:07:51.388 "w_mbytes_per_sec": 0 00:07:51.388 }, 00:07:51.388 "claimed": false, 00:07:51.388 "zoned": false, 00:07:51.388 "supported_io_types": { 00:07:51.388 "read": true, 00:07:51.388 "write": true, 00:07:51.388 "unmap": true, 00:07:51.388 "flush": true, 00:07:51.388 "reset": true, 00:07:51.388 "nvme_admin": false, 00:07:51.388 "nvme_io": false, 00:07:51.388 "nvme_io_md": false, 00:07:51.388 "write_zeroes": true, 00:07:51.388 "zcopy": true, 00:07:51.388 "get_zone_info": false, 00:07:51.388 "zone_management": false, 00:07:51.388 "zone_append": false, 00:07:51.388 "compare": false, 00:07:51.388 "compare_and_write": false, 00:07:51.388 "abort": true, 00:07:51.388 "seek_hole": false, 00:07:51.388 "seek_data": false, 00:07:51.388 "copy": true, 00:07:51.388 "nvme_iov_md": false 00:07:51.388 }, 00:07:51.388 "memory_domains": [ 00:07:51.388 { 00:07:51.388 "dma_device_id": "system", 00:07:51.388 "dma_device_type": 1 00:07:51.388 }, 00:07:51.388 { 00:07:51.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.388 "dma_device_type": 2 00:07:51.388 } 
00:07:51.388 ], 00:07:51.388 "driver_specific": {} 00:07:51.388 } 00:07:51.388 ]' 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.388 [2024-11-20 07:04:48.624692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:51.388 [2024-11-20 07:04:48.624784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.388 [2024-11-20 07:04:48.624819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:51.388 [2024-11-20 07:04:48.624838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.388 [2024-11-20 07:04:48.628061] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.388 [2024-11-20 07:04:48.628293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:51.388 Passthru0 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:51.388 { 00:07:51.388 "name": "Malloc2", 00:07:51.388 "aliases": [ 00:07:51.388 "504aa4c9-984e-4641-8669-bcfff28c67a3" 
00:07:51.388 ], 00:07:51.388 "product_name": "Malloc disk", 00:07:51.388 "block_size": 512, 00:07:51.388 "num_blocks": 16384, 00:07:51.388 "uuid": "504aa4c9-984e-4641-8669-bcfff28c67a3", 00:07:51.388 "assigned_rate_limits": { 00:07:51.388 "rw_ios_per_sec": 0, 00:07:51.388 "rw_mbytes_per_sec": 0, 00:07:51.388 "r_mbytes_per_sec": 0, 00:07:51.388 "w_mbytes_per_sec": 0 00:07:51.388 }, 00:07:51.388 "claimed": true, 00:07:51.388 "claim_type": "exclusive_write", 00:07:51.388 "zoned": false, 00:07:51.388 "supported_io_types": { 00:07:51.388 "read": true, 00:07:51.388 "write": true, 00:07:51.388 "unmap": true, 00:07:51.388 "flush": true, 00:07:51.388 "reset": true, 00:07:51.388 "nvme_admin": false, 00:07:51.388 "nvme_io": false, 00:07:51.388 "nvme_io_md": false, 00:07:51.388 "write_zeroes": true, 00:07:51.388 "zcopy": true, 00:07:51.388 "get_zone_info": false, 00:07:51.388 "zone_management": false, 00:07:51.388 "zone_append": false, 00:07:51.388 "compare": false, 00:07:51.388 "compare_and_write": false, 00:07:51.388 "abort": true, 00:07:51.388 "seek_hole": false, 00:07:51.388 "seek_data": false, 00:07:51.388 "copy": true, 00:07:51.388 "nvme_iov_md": false 00:07:51.388 }, 00:07:51.388 "memory_domains": [ 00:07:51.388 { 00:07:51.388 "dma_device_id": "system", 00:07:51.388 "dma_device_type": 1 00:07:51.388 }, 00:07:51.388 { 00:07:51.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.388 "dma_device_type": 2 00:07:51.388 } 00:07:51.388 ], 00:07:51.388 "driver_specific": {} 00:07:51.388 }, 00:07:51.388 { 00:07:51.388 "name": "Passthru0", 00:07:51.388 "aliases": [ 00:07:51.388 "0cecb5ec-7c65-5b0b-b7e0-adb074e77ccc" 00:07:51.388 ], 00:07:51.388 "product_name": "passthru", 00:07:51.388 "block_size": 512, 00:07:51.388 "num_blocks": 16384, 00:07:51.388 "uuid": "0cecb5ec-7c65-5b0b-b7e0-adb074e77ccc", 00:07:51.388 "assigned_rate_limits": { 00:07:51.388 "rw_ios_per_sec": 0, 00:07:51.388 "rw_mbytes_per_sec": 0, 00:07:51.388 "r_mbytes_per_sec": 0, 00:07:51.388 "w_mbytes_per_sec": 0 
00:07:51.388 }, 00:07:51.388 "claimed": false, 00:07:51.388 "zoned": false, 00:07:51.388 "supported_io_types": { 00:07:51.388 "read": true, 00:07:51.388 "write": true, 00:07:51.388 "unmap": true, 00:07:51.388 "flush": true, 00:07:51.388 "reset": true, 00:07:51.388 "nvme_admin": false, 00:07:51.388 "nvme_io": false, 00:07:51.388 "nvme_io_md": false, 00:07:51.388 "write_zeroes": true, 00:07:51.388 "zcopy": true, 00:07:51.388 "get_zone_info": false, 00:07:51.388 "zone_management": false, 00:07:51.388 "zone_append": false, 00:07:51.388 "compare": false, 00:07:51.388 "compare_and_write": false, 00:07:51.388 "abort": true, 00:07:51.388 "seek_hole": false, 00:07:51.388 "seek_data": false, 00:07:51.388 "copy": true, 00:07:51.388 "nvme_iov_md": false 00:07:51.388 }, 00:07:51.388 "memory_domains": [ 00:07:51.388 { 00:07:51.388 "dma_device_id": "system", 00:07:51.388 "dma_device_type": 1 00:07:51.388 }, 00:07:51.388 { 00:07:51.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.388 "dma_device_type": 2 00:07:51.388 } 00:07:51.388 ], 00:07:51.388 "driver_specific": { 00:07:51.388 "passthru": { 00:07:51.388 "name": "Passthru0", 00:07:51.388 "base_bdev_name": "Malloc2" 00:07:51.388 } 00:07:51.388 } 00:07:51.388 } 00:07:51.388 ]' 00:07:51.388 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:51.647 ************************************ 00:07:51.647 END TEST rpc_daemon_integrity 00:07:51.647 ************************************ 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:51.647 00:07:51.647 real 0m0.370s 00:07:51.647 user 0m0.238s 00:07:51.647 sys 0m0.037s 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.647 07:04:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:51.647 07:04:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:51.647 07:04:48 rpc -- rpc/rpc.sh@84 -- # killprocess 56748 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 56748 ']' 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@958 -- # kill -0 56748 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@959 -- # uname 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56748 00:07:51.647 killing process with pid 56748 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56748' 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@973 -- # kill 56748 00:07:51.647 07:04:48 rpc -- common/autotest_common.sh@978 -- # wait 56748 00:07:54.177 ************************************ 00:07:54.177 END TEST rpc 00:07:54.177 ************************************ 00:07:54.177 00:07:54.177 real 0m5.217s 00:07:54.177 user 0m6.005s 00:07:54.177 sys 0m0.849s 00:07:54.177 07:04:51 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.177 07:04:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.177 07:04:51 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:54.177 07:04:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.177 07:04:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.177 07:04:51 -- common/autotest_common.sh@10 -- # set +x 00:07:54.177 ************************************ 00:07:54.177 START TEST skip_rpc 00:07:54.177 ************************************ 00:07:54.177 07:04:51 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:54.177 * Looking for test storage... 
00:07:54.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:54.177 07:04:51 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:54.177 07:04:51 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:54.177 07:04:51 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:54.177 07:04:51 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:54.177 07:04:51 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.178 07:04:51 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.178 --rc genhtml_branch_coverage=1 00:07:54.178 --rc genhtml_function_coverage=1 00:07:54.178 --rc genhtml_legend=1 00:07:54.178 --rc geninfo_all_blocks=1 00:07:54.178 --rc geninfo_unexecuted_blocks=1 00:07:54.178 00:07:54.178 ' 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.178 --rc genhtml_branch_coverage=1 00:07:54.178 --rc genhtml_function_coverage=1 00:07:54.178 --rc genhtml_legend=1 00:07:54.178 --rc geninfo_all_blocks=1 00:07:54.178 --rc geninfo_unexecuted_blocks=1 00:07:54.178 00:07:54.178 ' 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.178 --rc genhtml_branch_coverage=1 00:07:54.178 --rc genhtml_function_coverage=1 00:07:54.178 --rc genhtml_legend=1 00:07:54.178 --rc geninfo_all_blocks=1 00:07:54.178 --rc geninfo_unexecuted_blocks=1 00:07:54.178 00:07:54.178 ' 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:54.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.178 --rc genhtml_branch_coverage=1 00:07:54.178 --rc genhtml_function_coverage=1 00:07:54.178 --rc genhtml_legend=1 00:07:54.178 --rc geninfo_all_blocks=1 00:07:54.178 --rc geninfo_unexecuted_blocks=1 00:07:54.178 00:07:54.178 ' 00:07:54.178 07:04:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:54.178 07:04:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:54.178 07:04:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.178 07:04:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:54.178 ************************************ 00:07:54.178 START TEST skip_rpc 00:07:54.178 ************************************ 00:07:54.178 07:04:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:54.178 07:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56977 00:07:54.178 07:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:54.178 07:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:54.178 07:04:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:54.436 [2024-11-20 07:04:51.593220] Starting SPDK v25.01-pre 
git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:07:54.436 [2024-11-20 07:04:51.593710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56977 ] 00:07:54.695 [2024-11-20 07:04:51.786562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.695 [2024-11-20 07:04:51.919300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56977 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56977 ']' 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56977 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56977 00:07:59.958 killing process with pid 56977 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56977' 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56977 00:07:59.958 07:04:56 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56977 00:08:01.859 00:08:01.859 real 0m7.302s 00:08:01.859 user 0m6.705s 00:08:01.859 sys 0m0.485s 00:08:01.859 07:04:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.859 07:04:58 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.859 ************************************ 00:08:01.859 END TEST skip_rpc 00:08:01.859 ************************************ 00:08:01.859 07:04:58 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:08:01.859 07:04:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.859 07:04:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.859 07:04:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:01.859 
************************************ 00:08:01.859 START TEST skip_rpc_with_json 00:08:01.859 ************************************ 00:08:01.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57081 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57081 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57081 ']' 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.859 07:04:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:01.859 [2024-11-20 07:04:58.940541] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:08:01.859 [2024-11-20 07:04:58.941192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57081 ] 00:08:01.859 [2024-11-20 07:04:59.145996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.116 [2024-11-20 07:04:59.281439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:03.050 [2024-11-20 07:05:00.293058] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:08:03.050 request: 00:08:03.050 { 00:08:03.050 "trtype": "tcp", 00:08:03.050 "method": "nvmf_get_transports", 00:08:03.050 "req_id": 1 00:08:03.050 } 00:08:03.050 Got JSON-RPC error response 00:08:03.050 response: 00:08:03.050 { 00:08:03.050 "code": -19, 00:08:03.050 "message": "No such device" 00:08:03.050 } 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.050 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:03.051 [2024-11-20 07:05:00.305251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:08:03.051 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.051 07:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:08:03.051 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.051 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:03.309 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.309 07:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:03.309 { 00:08:03.309 "subsystems": [ 00:08:03.309 { 00:08:03.309 "subsystem": "fsdev", 00:08:03.309 "config": [ 00:08:03.309 { 00:08:03.309 "method": "fsdev_set_opts", 00:08:03.309 "params": { 00:08:03.309 "fsdev_io_pool_size": 65535, 00:08:03.309 "fsdev_io_cache_size": 256 00:08:03.309 } 00:08:03.309 } 00:08:03.309 ] 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "subsystem": "keyring", 00:08:03.309 "config": [] 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "subsystem": "iobuf", 00:08:03.309 "config": [ 00:08:03.309 { 00:08:03.309 "method": "iobuf_set_options", 00:08:03.309 "params": { 00:08:03.309 "small_pool_count": 8192, 00:08:03.309 "large_pool_count": 1024, 00:08:03.309 "small_bufsize": 8192, 00:08:03.309 "large_bufsize": 135168, 00:08:03.309 "enable_numa": false 00:08:03.309 } 00:08:03.309 } 00:08:03.309 ] 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "subsystem": "sock", 00:08:03.309 "config": [ 00:08:03.309 { 00:08:03.309 "method": "sock_set_default_impl", 00:08:03.309 "params": { 00:08:03.309 "impl_name": "posix" 00:08:03.309 } 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "method": "sock_impl_set_options", 00:08:03.309 "params": { 00:08:03.309 "impl_name": "ssl", 00:08:03.309 "recv_buf_size": 4096, 00:08:03.309 "send_buf_size": 4096, 00:08:03.309 "enable_recv_pipe": true, 00:08:03.309 "enable_quickack": false, 00:08:03.309 
"enable_placement_id": 0, 00:08:03.309 "enable_zerocopy_send_server": true, 00:08:03.309 "enable_zerocopy_send_client": false, 00:08:03.309 "zerocopy_threshold": 0, 00:08:03.309 "tls_version": 0, 00:08:03.309 "enable_ktls": false 00:08:03.309 } 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "method": "sock_impl_set_options", 00:08:03.309 "params": { 00:08:03.309 "impl_name": "posix", 00:08:03.309 "recv_buf_size": 2097152, 00:08:03.309 "send_buf_size": 2097152, 00:08:03.309 "enable_recv_pipe": true, 00:08:03.309 "enable_quickack": false, 00:08:03.309 "enable_placement_id": 0, 00:08:03.309 "enable_zerocopy_send_server": true, 00:08:03.309 "enable_zerocopy_send_client": false, 00:08:03.309 "zerocopy_threshold": 0, 00:08:03.309 "tls_version": 0, 00:08:03.309 "enable_ktls": false 00:08:03.309 } 00:08:03.309 } 00:08:03.309 ] 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "subsystem": "vmd", 00:08:03.309 "config": [] 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "subsystem": "accel", 00:08:03.309 "config": [ 00:08:03.309 { 00:08:03.309 "method": "accel_set_options", 00:08:03.309 "params": { 00:08:03.309 "small_cache_size": 128, 00:08:03.309 "large_cache_size": 16, 00:08:03.309 "task_count": 2048, 00:08:03.309 "sequence_count": 2048, 00:08:03.309 "buf_count": 2048 00:08:03.309 } 00:08:03.309 } 00:08:03.309 ] 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "subsystem": "bdev", 00:08:03.309 "config": [ 00:08:03.309 { 00:08:03.309 "method": "bdev_set_options", 00:08:03.309 "params": { 00:08:03.309 "bdev_io_pool_size": 65535, 00:08:03.309 "bdev_io_cache_size": 256, 00:08:03.309 "bdev_auto_examine": true, 00:08:03.309 "iobuf_small_cache_size": 128, 00:08:03.309 "iobuf_large_cache_size": 16 00:08:03.309 } 00:08:03.309 }, 00:08:03.309 { 00:08:03.309 "method": "bdev_raid_set_options", 00:08:03.309 "params": { 00:08:03.309 "process_window_size_kb": 1024, 00:08:03.310 "process_max_bandwidth_mb_sec": 0 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "bdev_iscsi_set_options", 
00:08:03.310 "params": { 00:08:03.310 "timeout_sec": 30 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "bdev_nvme_set_options", 00:08:03.310 "params": { 00:08:03.310 "action_on_timeout": "none", 00:08:03.310 "timeout_us": 0, 00:08:03.310 "timeout_admin_us": 0, 00:08:03.310 "keep_alive_timeout_ms": 10000, 00:08:03.310 "arbitration_burst": 0, 00:08:03.310 "low_priority_weight": 0, 00:08:03.310 "medium_priority_weight": 0, 00:08:03.310 "high_priority_weight": 0, 00:08:03.310 "nvme_adminq_poll_period_us": 10000, 00:08:03.310 "nvme_ioq_poll_period_us": 0, 00:08:03.310 "io_queue_requests": 0, 00:08:03.310 "delay_cmd_submit": true, 00:08:03.310 "transport_retry_count": 4, 00:08:03.310 "bdev_retry_count": 3, 00:08:03.310 "transport_ack_timeout": 0, 00:08:03.310 "ctrlr_loss_timeout_sec": 0, 00:08:03.310 "reconnect_delay_sec": 0, 00:08:03.310 "fast_io_fail_timeout_sec": 0, 00:08:03.310 "disable_auto_failback": false, 00:08:03.310 "generate_uuids": false, 00:08:03.310 "transport_tos": 0, 00:08:03.310 "nvme_error_stat": false, 00:08:03.310 "rdma_srq_size": 0, 00:08:03.310 "io_path_stat": false, 00:08:03.310 "allow_accel_sequence": false, 00:08:03.310 "rdma_max_cq_size": 0, 00:08:03.310 "rdma_cm_event_timeout_ms": 0, 00:08:03.310 "dhchap_digests": [ 00:08:03.310 "sha256", 00:08:03.310 "sha384", 00:08:03.310 "sha512" 00:08:03.310 ], 00:08:03.310 "dhchap_dhgroups": [ 00:08:03.310 "null", 00:08:03.310 "ffdhe2048", 00:08:03.310 "ffdhe3072", 00:08:03.310 "ffdhe4096", 00:08:03.310 "ffdhe6144", 00:08:03.310 "ffdhe8192" 00:08:03.310 ] 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "bdev_nvme_set_hotplug", 00:08:03.310 "params": { 00:08:03.310 "period_us": 100000, 00:08:03.310 "enable": false 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "bdev_wait_for_examine" 00:08:03.310 } 00:08:03.310 ] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "scsi", 00:08:03.310 "config": null 00:08:03.310 }, 00:08:03.310 { 
00:08:03.310 "subsystem": "scheduler", 00:08:03.310 "config": [ 00:08:03.310 { 00:08:03.310 "method": "framework_set_scheduler", 00:08:03.310 "params": { 00:08:03.310 "name": "static" 00:08:03.310 } 00:08:03.310 } 00:08:03.310 ] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "vhost_scsi", 00:08:03.310 "config": [] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "vhost_blk", 00:08:03.310 "config": [] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "ublk", 00:08:03.310 "config": [] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "nbd", 00:08:03.310 "config": [] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "nvmf", 00:08:03.310 "config": [ 00:08:03.310 { 00:08:03.310 "method": "nvmf_set_config", 00:08:03.310 "params": { 00:08:03.310 "discovery_filter": "match_any", 00:08:03.310 "admin_cmd_passthru": { 00:08:03.310 "identify_ctrlr": false 00:08:03.310 }, 00:08:03.310 "dhchap_digests": [ 00:08:03.310 "sha256", 00:08:03.310 "sha384", 00:08:03.310 "sha512" 00:08:03.310 ], 00:08:03.310 "dhchap_dhgroups": [ 00:08:03.310 "null", 00:08:03.310 "ffdhe2048", 00:08:03.310 "ffdhe3072", 00:08:03.310 "ffdhe4096", 00:08:03.310 "ffdhe6144", 00:08:03.310 "ffdhe8192" 00:08:03.310 ] 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "nvmf_set_max_subsystems", 00:08:03.310 "params": { 00:08:03.310 "max_subsystems": 1024 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "nvmf_set_crdt", 00:08:03.310 "params": { 00:08:03.310 "crdt1": 0, 00:08:03.310 "crdt2": 0, 00:08:03.310 "crdt3": 0 00:08:03.310 } 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "method": "nvmf_create_transport", 00:08:03.310 "params": { 00:08:03.310 "trtype": "TCP", 00:08:03.310 "max_queue_depth": 128, 00:08:03.310 "max_io_qpairs_per_ctrlr": 127, 00:08:03.310 "in_capsule_data_size": 4096, 00:08:03.310 "max_io_size": 131072, 00:08:03.310 "io_unit_size": 131072, 00:08:03.310 "max_aq_depth": 128, 00:08:03.310 "num_shared_buffers": 511, 
00:08:03.310 "buf_cache_size": 4294967295, 00:08:03.310 "dif_insert_or_strip": false, 00:08:03.310 "zcopy": false, 00:08:03.310 "c2h_success": true, 00:08:03.310 "sock_priority": 0, 00:08:03.310 "abort_timeout_sec": 1, 00:08:03.310 "ack_timeout": 0, 00:08:03.310 "data_wr_pool_size": 0 00:08:03.310 } 00:08:03.310 } 00:08:03.310 ] 00:08:03.310 }, 00:08:03.310 { 00:08:03.310 "subsystem": "iscsi", 00:08:03.310 "config": [ 00:08:03.310 { 00:08:03.310 "method": "iscsi_set_options", 00:08:03.310 "params": { 00:08:03.310 "node_base": "iqn.2016-06.io.spdk", 00:08:03.310 "max_sessions": 128, 00:08:03.310 "max_connections_per_session": 2, 00:08:03.310 "max_queue_depth": 64, 00:08:03.310 "default_time2wait": 2, 00:08:03.310 "default_time2retain": 20, 00:08:03.310 "first_burst_length": 8192, 00:08:03.310 "immediate_data": true, 00:08:03.310 "allow_duplicated_isid": false, 00:08:03.310 "error_recovery_level": 0, 00:08:03.310 "nop_timeout": 60, 00:08:03.310 "nop_in_interval": 30, 00:08:03.310 "disable_chap": false, 00:08:03.310 "require_chap": false, 00:08:03.310 "mutual_chap": false, 00:08:03.310 "chap_group": 0, 00:08:03.310 "max_large_datain_per_connection": 64, 00:08:03.310 "max_r2t_per_connection": 4, 00:08:03.310 "pdu_pool_size": 36864, 00:08:03.310 "immediate_data_pool_size": 16384, 00:08:03.310 "data_out_pool_size": 2048 00:08:03.310 } 00:08:03.310 } 00:08:03.310 ] 00:08:03.310 } 00:08:03.310 ] 00:08:03.310 } 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57081 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57081 ']' 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57081 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57081 00:08:03.310 killing process with pid 57081 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57081' 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57081 00:08:03.310 07:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57081 00:08:05.839 07:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57137 00:08:05.839 07:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:05.839 07:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57137 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57137 ']' 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57137 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57137 00:08:11.118 killing process with pid 57137 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57137' 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57137 00:08:11.118 07:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57137 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:13.027 00:08:13.027 real 0m11.304s 00:08:13.027 user 0m10.888s 00:08:13.027 sys 0m1.070s 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:13.027 ************************************ 00:08:13.027 END TEST skip_rpc_with_json 00:08:13.027 ************************************ 00:08:13.027 07:05:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:13.027 07:05:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.027 07:05:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.027 07:05:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.027 ************************************ 00:08:13.027 START TEST skip_rpc_with_delay 00:08:13.027 ************************************ 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:13.027 
07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:13.027 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:13.028 [2024-11-20 07:05:10.243646] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:13.028 00:08:13.028 real 0m0.186s 00:08:13.028 user 0m0.096s 00:08:13.028 sys 0m0.088s 00:08:13.028 ************************************ 00:08:13.028 END TEST skip_rpc_with_delay 00:08:13.028 ************************************ 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.028 07:05:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:13.028 07:05:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:13.028 07:05:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:13.028 07:05:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:13.028 07:05:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.028 07:05:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.028 07:05:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:13.286 ************************************ 00:08:13.286 START TEST exit_on_failed_rpc_init 00:08:13.286 ************************************ 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57265 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57265 00:08:13.286 07:05:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57265 ']' 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.286 07:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:13.286 [2024-11-20 07:05:10.481234] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:08:13.286 [2024-11-20 07:05:10.481678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57265 ] 00:08:13.542 [2024-11-20 07:05:10.657341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.542 [2024-11-20 07:05:10.790128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:14.482 07:05:11 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:14.482 07:05:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:14.740 [2024-11-20 07:05:11.817348] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:08:14.740 [2024-11-20 07:05:11.817544] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57294 ] 00:08:14.740 [2024-11-20 07:05:12.017387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.998 [2024-11-20 07:05:12.183262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.998 [2024-11-20 07:05:12.183398] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:08:14.998 [2024-11-20 07:05:12.183422] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:14.998 [2024-11-20 07:05:12.183441] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57265 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57265 ']' 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57265 00:08:15.257 07:05:12 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57265 00:08:15.257 killing process with pid 57265 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57265' 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57265 00:08:15.257 07:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57265 00:08:17.785 00:08:17.785 real 0m4.362s 00:08:17.785 user 0m4.829s 00:08:17.785 sys 0m0.697s 00:08:17.785 07:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.785 07:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:17.785 ************************************ 00:08:17.785 END TEST exit_on_failed_rpc_init 00:08:17.785 ************************************ 00:08:17.785 07:05:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:17.785 00:08:17.785 real 0m23.536s 00:08:17.785 user 0m22.711s 00:08:17.785 sys 0m2.528s 00:08:17.785 ************************************ 00:08:17.785 END TEST skip_rpc 00:08:17.785 ************************************ 00:08:17.785 07:05:14 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.785 07:05:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:17.786 07:05:14 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:17.786 07:05:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:17.786 07:05:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.786 07:05:14 -- common/autotest_common.sh@10 -- # set +x 00:08:17.786 ************************************ 00:08:17.786 START TEST rpc_client 00:08:17.786 ************************************ 00:08:17.786 07:05:14 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:17.786 * Looking for test storage... 00:08:17.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:17.786 07:05:14 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:17.786 07:05:14 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:17.786 07:05:14 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:08:17.786 07:05:14 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@345 
-- # : 1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.786 07:05:14 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.786 07:05:15 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:17.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.786 --rc genhtml_branch_coverage=1 00:08:17.786 --rc genhtml_function_coverage=1 00:08:17.786 --rc genhtml_legend=1 00:08:17.786 --rc geninfo_all_blocks=1 00:08:17.786 --rc geninfo_unexecuted_blocks=1 00:08:17.786 00:08:17.786 ' 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:17.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.786 --rc genhtml_branch_coverage=1 00:08:17.786 --rc genhtml_function_coverage=1 00:08:17.786 --rc 
genhtml_legend=1 00:08:17.786 --rc geninfo_all_blocks=1 00:08:17.786 --rc geninfo_unexecuted_blocks=1 00:08:17.786 00:08:17.786 ' 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:17.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.786 --rc genhtml_branch_coverage=1 00:08:17.786 --rc genhtml_function_coverage=1 00:08:17.786 --rc genhtml_legend=1 00:08:17.786 --rc geninfo_all_blocks=1 00:08:17.786 --rc geninfo_unexecuted_blocks=1 00:08:17.786 00:08:17.786 ' 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:17.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.786 --rc genhtml_branch_coverage=1 00:08:17.786 --rc genhtml_function_coverage=1 00:08:17.786 --rc genhtml_legend=1 00:08:17.786 --rc geninfo_all_blocks=1 00:08:17.786 --rc geninfo_unexecuted_blocks=1 00:08:17.786 00:08:17.786 ' 00:08:17.786 07:05:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:17.786 OK 00:08:17.786 07:05:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:17.786 00:08:17.786 real 0m0.275s 00:08:17.786 user 0m0.155s 00:08:17.786 sys 0m0.129s 00:08:17.786 ************************************ 00:08:17.786 END TEST rpc_client 00:08:17.786 ************************************ 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.786 07:05:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:18.044 07:05:15 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:18.044 07:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.044 07:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.044 07:05:15 -- common/autotest_common.sh@10 -- # set +x 00:08:18.044 ************************************ 00:08:18.044 START TEST json_config 
00:08:18.044 ************************************ 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.044 07:05:15 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.044 07:05:15 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.044 07:05:15 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.044 07:05:15 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.044 07:05:15 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.044 07:05:15 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:18.044 07:05:15 json_config -- scripts/common.sh@345 -- # : 1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.044 07:05:15 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.044 07:05:15 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@353 -- # local d=1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.044 07:05:15 json_config -- scripts/common.sh@355 -- # echo 1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.044 07:05:15 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@353 -- # local d=2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.044 07:05:15 json_config -- scripts/common.sh@355 -- # echo 2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.044 07:05:15 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.044 07:05:15 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.044 07:05:15 json_config -- scripts/common.sh@368 -- # return 0 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.044 --rc genhtml_branch_coverage=1 00:08:18.044 --rc genhtml_function_coverage=1 00:08:18.044 --rc genhtml_legend=1 00:08:18.044 --rc geninfo_all_blocks=1 00:08:18.044 --rc geninfo_unexecuted_blocks=1 00:08:18.044 00:08:18.044 ' 00:08:18.044 07:05:15 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.044 --rc genhtml_branch_coverage=1 00:08:18.044 --rc genhtml_function_coverage=1 00:08:18.044 --rc genhtml_legend=1 00:08:18.044 --rc geninfo_all_blocks=1 00:08:18.044 --rc geninfo_unexecuted_blocks=1 00:08:18.044 00:08:18.044 ' 00:08:18.044 07:05:15 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.045 --rc genhtml_branch_coverage=1 00:08:18.045 --rc genhtml_function_coverage=1 00:08:18.045 --rc genhtml_legend=1 00:08:18.045 --rc geninfo_all_blocks=1 00:08:18.045 --rc geninfo_unexecuted_blocks=1 00:08:18.045 00:08:18.045 ' 00:08:18.045 07:05:15 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.045 --rc genhtml_branch_coverage=1 00:08:18.045 --rc genhtml_function_coverage=1 00:08:18.045 --rc genhtml_legend=1 00:08:18.045 --rc geninfo_all_blocks=1 00:08:18.045 --rc geninfo_unexecuted_blocks=1 00:08:18.045 00:08:18.045 ' 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e7ce7ac-7390-4a29-9314-9b7b8205f111 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=2e7ce7ac-7390-4a29-9314-9b7b8205f111 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.045 
07:05:15 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.045 07:05:15 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.045 07:05:15 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.045 07:05:15 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.045 07:05:15 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.045 07:05:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.045 07:05:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.045 07:05:15 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.045 07:05:15 json_config -- paths/export.sh@5 -- # export PATH 00:08:18.045 07:05:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:18.045 07:05:15 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:18.045 07:05:15 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:18.045 07:05:15 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@50 -- # : 0 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:18.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:08:18.045 07:05:15 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:18.045 07:05:15 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:18.045 WARNING: No tests are enabled so not running JSON configuration tests 00:08:18.045 07:05:15 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:18.045 00:08:18.045 real 0m0.192s 00:08:18.045 user 0m0.133s 00:08:18.045 sys 0m0.064s 00:08:18.045 07:05:15 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.045 07:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:18.045 ************************************ 00:08:18.045 END TEST json_config 00:08:18.045 ************************************ 00:08:18.304 07:05:15 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:18.304 07:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.304 07:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.304 07:05:15 -- common/autotest_common.sh@10 -- # set +x 00:08:18.304 ************************************ 00:08:18.304 START TEST json_config_extra_key 00:08:18.304 
************************************ 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.304 07:05:15 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.304 --rc genhtml_branch_coverage=1 00:08:18.304 --rc genhtml_function_coverage=1 00:08:18.304 --rc genhtml_legend=1 00:08:18.304 --rc geninfo_all_blocks=1 00:08:18.304 --rc geninfo_unexecuted_blocks=1 00:08:18.304 00:08:18.304 ' 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.304 --rc genhtml_branch_coverage=1 00:08:18.304 --rc genhtml_function_coverage=1 00:08:18.304 --rc 
genhtml_legend=1 00:08:18.304 --rc geninfo_all_blocks=1 00:08:18.304 --rc geninfo_unexecuted_blocks=1 00:08:18.304 00:08:18.304 ' 00:08:18.304 07:05:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.304 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.304 --rc genhtml_branch_coverage=1 00:08:18.304 --rc genhtml_function_coverage=1 00:08:18.304 --rc genhtml_legend=1 00:08:18.304 --rc geninfo_all_blocks=1 00:08:18.305 --rc geninfo_unexecuted_blocks=1 00:08:18.305 00:08:18.305 ' 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.305 --rc genhtml_branch_coverage=1 00:08:18.305 --rc genhtml_function_coverage=1 00:08:18.305 --rc genhtml_legend=1 00:08:18.305 --rc geninfo_all_blocks=1 00:08:18.305 --rc geninfo_unexecuted_blocks=1 00:08:18.305 00:08:18.305 ' 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2e7ce7ac-7390-4a29-9314-9b7b8205f111 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=2e7ce7ac-7390-4a29-9314-9b7b8205f111 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.305 07:05:15 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.305 07:05:15 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.305 07:05:15 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.305 07:05:15 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.305 07:05:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.305 07:05:15 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.305 07:05:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.305 07:05:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:18.305 07:05:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:18.305 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:18.305 07:05:15 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:18.305 INFO: launching applications... 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:18.305 07:05:15 
json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:18.305 07:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57499 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:18.305 Waiting for target to run... 
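As the trace shows, json_config/common.sh keeps its per-app settings in bash associative arrays keyed by app name (`target`): one array each for the PID, the RPC socket, the spdk_tgt parameters, and the config path. A minimal sketch of that pattern (values copied from the trace; the launch itself is only indicated in a comment):

```shell
#!/usr/bin/env bash
# Per-app settings keyed by app name, as in the traced common.sh.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')

app=target
# A real launch would look like:
#   spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" &
# Here we just record a stand-in PID and read the settings back.
app_pid[$app]=$$
echo "app=$app socket=${app_socket[$app]} params=${app_params[$app]}"
```

Keying every array by app name lets the same helpers drive a `target` app and, in other tests, an `initiator` app without duplicated code.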
00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57499 /var/tmp/spdk_tgt.sock 00:08:18.305 07:05:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57499 ']' 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:18.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.305 07:05:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:18.564 [2024-11-20 07:05:15.707785] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:08:18.564 [2024-11-20 07:05:15.708428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57499 ] 00:08:19.130 [2024-11-20 07:05:16.190617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.130 [2024-11-20 07:05:16.328430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.065 00:08:20.065 INFO: shutting down applications... 
00:08:20.065 07:05:17 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.065 07:05:17 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:20.065 07:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:20.065 07:05:17 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57499 ]] 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57499 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:20.065 07:05:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:20.322 07:05:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:20.322 07:05:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:20.322 07:05:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:20.322 07:05:17 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:20.889 07:05:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:20.889 07:05:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:20.889 07:05:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:20.889 07:05:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:21.457 07:05:18 
json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:21.457 07:05:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:21.457 07:05:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:21.457 07:05:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:21.790 07:05:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:21.790 07:05:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:21.790 07:05:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:21.790 07:05:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:22.371 07:05:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:22.371 07:05:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:22.371 07:05:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:22.371 07:05:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57499 00:08:22.939 SPDK target shutdown done 00:08:22.939 Success 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:22.939 07:05:20 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:22.939 07:05:20 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:22.939 00:08:22.939 real 0m4.686s 00:08:22.939 user 0m4.090s 00:08:22.939 sys 0m0.682s 00:08:22.939 ************************************ 
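The shutdown sequence traced above sends the target a SIGINT and then polls with `kill -0` every 0.5 s, for up to 30 attempts, until the process is gone. A sketch of that pattern (the helper name `wait_for_exit` and the signal parameter are illustrative; the demo uses SIGTERM because backgrounded children of a non-interactive shell ignore SIGINT):

```shell
#!/usr/bin/env bash
# Sketch of the signal-then-poll shutdown loop seen in the trace.
wait_for_exit() {
    local pid=$1 sig=${2:-INT} i
    kill -"$sig" "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only checks the process still exists
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1  # still alive after ~15 s
}

sleep 60 &   # stand-in for the spdk_tgt process
wait_for_exit $! TERM && echo "SPDK target shutdown done"
```

The repeated `kill -0 57499` / `sleep 0.5` lines in the trace are exactly this loop iterating until spdk_tgt finished draining and exited.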
00:08:22.939 END TEST json_config_extra_key 00:08:22.939 ************************************ 00:08:22.939 07:05:20 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.939 07:05:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:22.939 07:05:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:22.939 07:05:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.939 07:05:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.939 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:08:22.939 ************************************ 00:08:22.939 START TEST alias_rpc 00:08:22.939 ************************************ 00:08:22.939 07:05:20 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:22.939 * Looking for test storage... 00:08:22.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:22.939 07:05:20 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.939 07:05:20 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.939 07:05:20 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@338 -- # local 
'op=<' 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.198 07:05:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:23.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.198 --rc genhtml_branch_coverage=1 00:08:23.198 --rc genhtml_function_coverage=1 00:08:23.198 --rc genhtml_legend=1 00:08:23.198 --rc 
geninfo_all_blocks=1 00:08:23.198 --rc geninfo_unexecuted_blocks=1 00:08:23.198 00:08:23.198 ' 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:23.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.198 --rc genhtml_branch_coverage=1 00:08:23.198 --rc genhtml_function_coverage=1 00:08:23.198 --rc genhtml_legend=1 00:08:23.198 --rc geninfo_all_blocks=1 00:08:23.198 --rc geninfo_unexecuted_blocks=1 00:08:23.198 00:08:23.198 ' 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:23.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.198 --rc genhtml_branch_coverage=1 00:08:23.198 --rc genhtml_function_coverage=1 00:08:23.198 --rc genhtml_legend=1 00:08:23.198 --rc geninfo_all_blocks=1 00:08:23.198 --rc geninfo_unexecuted_blocks=1 00:08:23.198 00:08:23.198 ' 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:23.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.198 --rc genhtml_branch_coverage=1 00:08:23.198 --rc genhtml_function_coverage=1 00:08:23.198 --rc genhtml_legend=1 00:08:23.198 --rc geninfo_all_blocks=1 00:08:23.198 --rc geninfo_unexecuted_blocks=1 00:08:23.198 00:08:23.198 ' 00:08:23.198 07:05:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:23.198 07:05:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57610 00:08:23.198 07:05:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:23.198 07:05:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57610 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57610 ']' 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:23.198 07:05:20 alias_rpc -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:23.198 07:05:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:23.198 [2024-11-20 07:05:20.445544] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:08:23.198 [2024-11-20 07:05:20.446010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57610 ] 00:08:23.457 [2024-11-20 07:05:20.632017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.457 [2024-11-20 07:05:20.760465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.392 07:05:21 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.392 07:05:21 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:24.392 07:05:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:24.651 07:05:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57610 00:08:24.651 07:05:21 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57610 ']' 00:08:24.651 07:05:21 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57610 00:08:24.651 07:05:21 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:24.651 07:05:21 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.651 07:05:21 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57610 00:08:24.909 killing process with pid 57610 00:08:24.909 07:05:21 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.909 
07:05:21 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.909 07:05:21 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57610' 00:08:24.909 07:05:21 alias_rpc -- common/autotest_common.sh@973 -- # kill 57610 00:08:24.909 07:05:21 alias_rpc -- common/autotest_common.sh@978 -- # wait 57610 00:08:27.442 ************************************ 00:08:27.442 END TEST alias_rpc 00:08:27.442 ************************************ 00:08:27.442 00:08:27.442 real 0m4.125s 00:08:27.442 user 0m4.230s 00:08:27.442 sys 0m0.636s 00:08:27.442 07:05:24 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.442 07:05:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:27.442 07:05:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:27.442 07:05:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:27.442 07:05:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.442 07:05:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.442 07:05:24 -- common/autotest_common.sh@10 -- # set +x 00:08:27.442 ************************************ 00:08:27.442 START TEST spdkcli_tcp 00:08:27.442 ************************************ 00:08:27.442 07:05:24 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:27.442 * Looking for test storage... 
00:08:27.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:27.442 07:05:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.442 07:05:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.442 07:05:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.442 07:05:24 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:27.442 07:05:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:27.443 07:05:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.443 07:05:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:27.443 07:05:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.443 07:05:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.443 07:05:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.443 07:05:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57717 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:27.443 07:05:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57717 00:08:27.443 07:05:24 spdkcli_tcp --
common/autotest_common.sh@835 -- # '[' -z 57717 ']' 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.443 07:05:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.443 [2024-11-20 07:05:24.610936] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:08:27.443 [2024-11-20 07:05:24.611438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57717 ] 00:08:27.707 [2024-11-20 07:05:24.795151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.707 [2024-11-20 07:05:24.925115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.707 [2024-11-20 07:05:24.925135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.642 07:05:25 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.642 07:05:25 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:28.642 07:05:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57740 00:08:28.642 07:05:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:28.642 07:05:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:28.900 [ 00:08:28.900 "bdev_malloc_delete", 
00:08:28.900 "bdev_malloc_create", 00:08:28.900 "bdev_null_resize", 00:08:28.900 "bdev_null_delete", 00:08:28.900 "bdev_null_create", 00:08:28.900 "bdev_nvme_cuse_unregister", 00:08:28.900 "bdev_nvme_cuse_register", 00:08:28.900 "bdev_opal_new_user", 00:08:28.900 "bdev_opal_set_lock_state", 00:08:28.900 "bdev_opal_delete", 00:08:28.900 "bdev_opal_get_info", 00:08:28.900 "bdev_opal_create", 00:08:28.901 "bdev_nvme_opal_revert", 00:08:28.901 "bdev_nvme_opal_init", 00:08:28.901 "bdev_nvme_send_cmd", 00:08:28.901 "bdev_nvme_set_keys", 00:08:28.901 "bdev_nvme_get_path_iostat", 00:08:28.901 "bdev_nvme_get_mdns_discovery_info", 00:08:28.901 "bdev_nvme_stop_mdns_discovery", 00:08:28.901 "bdev_nvme_start_mdns_discovery", 00:08:28.901 "bdev_nvme_set_multipath_policy", 00:08:28.901 "bdev_nvme_set_preferred_path", 00:08:28.901 "bdev_nvme_get_io_paths", 00:08:28.901 "bdev_nvme_remove_error_injection", 00:08:28.901 "bdev_nvme_add_error_injection", 00:08:28.901 "bdev_nvme_get_discovery_info", 00:08:28.901 "bdev_nvme_stop_discovery", 00:08:28.901 "bdev_nvme_start_discovery", 00:08:28.901 "bdev_nvme_get_controller_health_info", 00:08:28.901 "bdev_nvme_disable_controller", 00:08:28.901 "bdev_nvme_enable_controller", 00:08:28.901 "bdev_nvme_reset_controller", 00:08:28.901 "bdev_nvme_get_transport_statistics", 00:08:28.901 "bdev_nvme_apply_firmware", 00:08:28.901 "bdev_nvme_detach_controller", 00:08:28.901 "bdev_nvme_get_controllers", 00:08:28.901 "bdev_nvme_attach_controller", 00:08:28.901 "bdev_nvme_set_hotplug", 00:08:28.901 "bdev_nvme_set_options", 00:08:28.901 "bdev_passthru_delete", 00:08:28.901 "bdev_passthru_create", 00:08:28.901 "bdev_lvol_set_parent_bdev", 00:08:28.901 "bdev_lvol_set_parent", 00:08:28.901 "bdev_lvol_check_shallow_copy", 00:08:28.901 "bdev_lvol_start_shallow_copy", 00:08:28.901 "bdev_lvol_grow_lvstore", 00:08:28.901 "bdev_lvol_get_lvols", 00:08:28.901 "bdev_lvol_get_lvstores", 00:08:28.901 "bdev_lvol_delete", 00:08:28.901 "bdev_lvol_set_read_only", 
00:08:28.901 "bdev_lvol_resize", 00:08:28.901 "bdev_lvol_decouple_parent", 00:08:28.901 "bdev_lvol_inflate", 00:08:28.901 "bdev_lvol_rename", 00:08:28.901 "bdev_lvol_clone_bdev", 00:08:28.901 "bdev_lvol_clone", 00:08:28.901 "bdev_lvol_snapshot", 00:08:28.901 "bdev_lvol_create", 00:08:28.901 "bdev_lvol_delete_lvstore", 00:08:28.901 "bdev_lvol_rename_lvstore", 00:08:28.901 "bdev_lvol_create_lvstore", 00:08:28.901 "bdev_raid_set_options", 00:08:28.901 "bdev_raid_remove_base_bdev", 00:08:28.901 "bdev_raid_add_base_bdev", 00:08:28.901 "bdev_raid_delete", 00:08:28.901 "bdev_raid_create", 00:08:28.901 "bdev_raid_get_bdevs", 00:08:28.901 "bdev_error_inject_error", 00:08:28.901 "bdev_error_delete", 00:08:28.901 "bdev_error_create", 00:08:28.901 "bdev_split_delete", 00:08:28.901 "bdev_split_create", 00:08:28.901 "bdev_delay_delete", 00:08:28.901 "bdev_delay_create", 00:08:28.901 "bdev_delay_update_latency", 00:08:28.901 "bdev_zone_block_delete", 00:08:28.901 "bdev_zone_block_create", 00:08:28.901 "blobfs_create", 00:08:28.901 "blobfs_detect", 00:08:28.901 "blobfs_set_cache_size", 00:08:28.901 "bdev_aio_delete", 00:08:28.901 "bdev_aio_rescan", 00:08:28.901 "bdev_aio_create", 00:08:28.901 "bdev_ftl_set_property", 00:08:28.901 "bdev_ftl_get_properties", 00:08:28.901 "bdev_ftl_get_stats", 00:08:28.901 "bdev_ftl_unmap", 00:08:28.901 "bdev_ftl_unload", 00:08:28.901 "bdev_ftl_delete", 00:08:28.901 "bdev_ftl_load", 00:08:28.901 "bdev_ftl_create", 00:08:28.901 "bdev_virtio_attach_controller", 00:08:28.901 "bdev_virtio_scsi_get_devices", 00:08:28.901 "bdev_virtio_detach_controller", 00:08:28.901 "bdev_virtio_blk_set_hotplug", 00:08:28.901 "bdev_iscsi_delete", 00:08:28.901 "bdev_iscsi_create", 00:08:28.901 "bdev_iscsi_set_options", 00:08:28.901 "accel_error_inject_error", 00:08:28.901 "ioat_scan_accel_module", 00:08:28.901 "dsa_scan_accel_module", 00:08:28.901 "iaa_scan_accel_module", 00:08:28.901 "keyring_file_remove_key", 00:08:28.901 "keyring_file_add_key", 00:08:28.901 
"keyring_linux_set_options", 00:08:28.901 "fsdev_aio_delete", 00:08:28.901 "fsdev_aio_create", 00:08:28.901 "iscsi_get_histogram", 00:08:28.901 "iscsi_enable_histogram", 00:08:28.901 "iscsi_set_options", 00:08:28.901 "iscsi_get_auth_groups", 00:08:28.901 "iscsi_auth_group_remove_secret", 00:08:28.901 "iscsi_auth_group_add_secret", 00:08:28.901 "iscsi_delete_auth_group", 00:08:28.901 "iscsi_create_auth_group", 00:08:28.901 "iscsi_set_discovery_auth", 00:08:28.901 "iscsi_get_options", 00:08:28.901 "iscsi_target_node_request_logout", 00:08:28.901 "iscsi_target_node_set_redirect", 00:08:28.901 "iscsi_target_node_set_auth", 00:08:28.901 "iscsi_target_node_add_lun", 00:08:28.901 "iscsi_get_stats", 00:08:28.901 "iscsi_get_connections", 00:08:28.901 "iscsi_portal_group_set_auth", 00:08:28.901 "iscsi_start_portal_group", 00:08:28.901 "iscsi_delete_portal_group", 00:08:28.901 "iscsi_create_portal_group", 00:08:28.901 "iscsi_get_portal_groups", 00:08:28.901 "iscsi_delete_target_node", 00:08:28.901 "iscsi_target_node_remove_pg_ig_maps", 00:08:28.901 "iscsi_target_node_add_pg_ig_maps", 00:08:28.901 "iscsi_create_target_node", 00:08:28.901 "iscsi_get_target_nodes", 00:08:28.901 "iscsi_delete_initiator_group", 00:08:28.901 "iscsi_initiator_group_remove_initiators", 00:08:28.901 "iscsi_initiator_group_add_initiators", 00:08:28.901 "iscsi_create_initiator_group", 00:08:28.901 "iscsi_get_initiator_groups", 00:08:28.901 "nvmf_set_crdt", 00:08:28.901 "nvmf_set_config", 00:08:28.901 "nvmf_set_max_subsystems", 00:08:28.901 "nvmf_stop_mdns_prr", 00:08:28.901 "nvmf_publish_mdns_prr", 00:08:28.901 "nvmf_subsystem_get_listeners", 00:08:28.901 "nvmf_subsystem_get_qpairs", 00:08:28.901 "nvmf_subsystem_get_controllers", 00:08:28.901 "nvmf_get_stats", 00:08:28.901 "nvmf_get_transports", 00:08:28.901 "nvmf_create_transport", 00:08:28.901 "nvmf_get_targets", 00:08:28.901 "nvmf_delete_target", 00:08:28.901 "nvmf_create_target", 00:08:28.901 "nvmf_subsystem_allow_any_host", 00:08:28.901 
"nvmf_subsystem_set_keys", 00:08:28.901 "nvmf_subsystem_remove_host", 00:08:28.901 "nvmf_subsystem_add_host", 00:08:28.901 "nvmf_ns_remove_host", 00:08:28.901 "nvmf_ns_add_host", 00:08:28.901 "nvmf_subsystem_remove_ns", 00:08:28.901 "nvmf_subsystem_set_ns_ana_group", 00:08:28.901 "nvmf_subsystem_add_ns", 00:08:28.901 "nvmf_subsystem_listener_set_ana_state", 00:08:28.901 "nvmf_discovery_get_referrals", 00:08:28.901 "nvmf_discovery_remove_referral", 00:08:28.901 "nvmf_discovery_add_referral", 00:08:28.901 "nvmf_subsystem_remove_listener", 00:08:28.901 "nvmf_subsystem_add_listener", 00:08:28.901 "nvmf_delete_subsystem", 00:08:28.901 "nvmf_create_subsystem", 00:08:28.901 "nvmf_get_subsystems", 00:08:28.901 "env_dpdk_get_mem_stats", 00:08:28.901 "nbd_get_disks", 00:08:28.901 "nbd_stop_disk", 00:08:28.901 "nbd_start_disk", 00:08:28.901 "ublk_recover_disk", 00:08:28.901 "ublk_get_disks", 00:08:28.901 "ublk_stop_disk", 00:08:28.901 "ublk_start_disk", 00:08:28.901 "ublk_destroy_target", 00:08:28.901 "ublk_create_target", 00:08:28.901 "virtio_blk_create_transport", 00:08:28.901 "virtio_blk_get_transports", 00:08:28.901 "vhost_controller_set_coalescing", 00:08:28.901 "vhost_get_controllers", 00:08:28.901 "vhost_delete_controller", 00:08:28.901 "vhost_create_blk_controller", 00:08:28.901 "vhost_scsi_controller_remove_target", 00:08:28.901 "vhost_scsi_controller_add_target", 00:08:28.901 "vhost_start_scsi_controller", 00:08:28.901 "vhost_create_scsi_controller", 00:08:28.901 "thread_set_cpumask", 00:08:28.901 "scheduler_set_options", 00:08:28.901 "framework_get_governor", 00:08:28.901 "framework_get_scheduler", 00:08:28.901 "framework_set_scheduler", 00:08:28.901 "framework_get_reactors", 00:08:28.901 "thread_get_io_channels", 00:08:28.901 "thread_get_pollers", 00:08:28.901 "thread_get_stats", 00:08:28.901 "framework_monitor_context_switch", 00:08:28.901 "spdk_kill_instance", 00:08:28.901 "log_enable_timestamps", 00:08:28.901 "log_get_flags", 00:08:28.901 "log_clear_flag", 
00:08:28.901 "log_set_flag", 00:08:28.901 "log_get_level", 00:08:28.901 "log_set_level", 00:08:28.901 "log_get_print_level", 00:08:28.901 "log_set_print_level", 00:08:28.901 "framework_enable_cpumask_locks", 00:08:28.901 "framework_disable_cpumask_locks", 00:08:28.901 "framework_wait_init", 00:08:28.901 "framework_start_init", 00:08:28.901 "scsi_get_devices", 00:08:28.901 "bdev_get_histogram", 00:08:28.901 "bdev_enable_histogram", 00:08:28.901 "bdev_set_qos_limit", 00:08:28.901 "bdev_set_qd_sampling_period", 00:08:28.901 "bdev_get_bdevs", 00:08:28.901 "bdev_reset_iostat", 00:08:28.901 "bdev_get_iostat", 00:08:28.901 "bdev_examine", 00:08:28.901 "bdev_wait_for_examine", 00:08:28.901 "bdev_set_options", 00:08:28.901 "accel_get_stats", 00:08:28.901 "accel_set_options", 00:08:28.901 "accel_set_driver", 00:08:28.901 "accel_crypto_key_destroy", 00:08:28.901 "accel_crypto_keys_get", 00:08:28.901 "accel_crypto_key_create", 00:08:28.901 "accel_assign_opc", 00:08:28.901 "accel_get_module_info", 00:08:28.901 "accel_get_opc_assignments", 00:08:28.901 "vmd_rescan", 00:08:28.901 "vmd_remove_device", 00:08:28.901 "vmd_enable", 00:08:28.901 "sock_get_default_impl", 00:08:28.901 "sock_set_default_impl", 00:08:28.901 "sock_impl_set_options", 00:08:28.901 "sock_impl_get_options", 00:08:28.901 "iobuf_get_stats", 00:08:28.901 "iobuf_set_options", 00:08:28.901 "keyring_get_keys", 00:08:28.901 "framework_get_pci_devices", 00:08:28.901 "framework_get_config", 00:08:28.901 "framework_get_subsystems", 00:08:28.901 "fsdev_set_opts", 00:08:28.901 "fsdev_get_opts", 00:08:28.902 "trace_get_info", 00:08:28.902 "trace_get_tpoint_group_mask", 00:08:28.902 "trace_disable_tpoint_group", 00:08:28.902 "trace_enable_tpoint_group", 00:08:28.902 "trace_clear_tpoint_mask", 00:08:28.902 "trace_set_tpoint_mask", 00:08:28.902 "notify_get_notifications", 00:08:28.902 "notify_get_types", 00:08:28.902 "spdk_get_version", 00:08:28.902 "rpc_get_methods" 00:08:28.902 ] 00:08:28.902 07:05:26 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.902 07:05:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:28.902 07:05:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57717 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57717 ']' 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57717 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.902 07:05:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57717 00:08:29.160 killing process with pid 57717 00:08:29.160 07:05:26 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.160 07:05:26 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.160 07:05:26 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57717' 00:08:29.160 07:05:26 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57717 00:08:29.160 07:05:26 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57717 00:08:31.690 ************************************ 00:08:31.690 END TEST spdkcli_tcp 00:08:31.690 ************************************ 00:08:31.690 00:08:31.690 real 0m4.111s 00:08:31.690 user 0m7.510s 00:08:31.690 sys 0m0.650s 00:08:31.690 07:05:28 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.690 07:05:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:31.690 07:05:28 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:31.690 07:05:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.690 07:05:28 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.690 07:05:28 -- common/autotest_common.sh@10 -- # set +x 00:08:31.690 ************************************ 00:08:31.690 START TEST dpdk_mem_utility 00:08:31.690 ************************************ 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:31.690 * Looking for test storage... 00:08:31.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:31.690 
07:05:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:31.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:31.690 07:05:28 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:31.690 07:05:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:31.690 07:05:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57839 00:08:31.690 07:05:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:31.690 07:05:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57839 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57839 ']' 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.690 07:05:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:31.690 [2024-11-20 07:05:28.786458] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:08:31.690 [2024-11-20 07:05:28.786837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57839 ] 00:08:31.690 [2024-11-20 07:05:28.961571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.949 [2024-11-20 07:05:29.132900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.887 07:05:29 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.887 07:05:29 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:32.887 07:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:32.887 07:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:32.887 07:05:29 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.887 07:05:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:32.887 { 00:08:32.887 "filename": "/tmp/spdk_mem_dump.txt" 00:08:32.887 } 00:08:32.887 07:05:29 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.887 07:05:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:32.887 DPDK memory size 816.000000 MiB in 1 heap(s) 00:08:32.887 1 heaps totaling size 816.000000 MiB 00:08:32.887 size: 816.000000 MiB heap id: 0 00:08:32.887 end heaps---------- 00:08:32.887 9 mempools totaling size 595.772034 MiB 00:08:32.887 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:32.887 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:32.887 size: 92.545471 MiB name: bdev_io_57839 00:08:32.887 size: 50.003479 MiB name: msgpool_57839 00:08:32.887 size: 36.509338 MiB name: fsdev_io_57839 00:08:32.887 size: 
21.763794 MiB name: PDU_Pool 00:08:32.887 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:32.887 size: 4.133484 MiB name: evtpool_57839 00:08:32.887 size: 0.026123 MiB name: Session_Pool 00:08:32.887 end mempools------- 00:08:32.887 6 memzones totaling size 4.142822 MiB 00:08:32.887 size: 1.000366 MiB name: RG_ring_0_57839 00:08:32.887 size: 1.000366 MiB name: RG_ring_1_57839 00:08:32.887 size: 1.000366 MiB name: RG_ring_4_57839 00:08:32.887 size: 1.000366 MiB name: RG_ring_5_57839 00:08:32.887 size: 0.125366 MiB name: RG_ring_2_57839 00:08:32.887 size: 0.015991 MiB name: RG_ring_3_57839 00:08:32.887 end memzones------- 00:08:32.887 07:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:32.887 heap id: 0 total size: 816.000000 MiB number of busy elements: 318 number of free elements: 18 00:08:32.887 list of free elements. size: 16.790649 MiB 00:08:32.887 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:32.887 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:32.887 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:32.887 element at address: 0x200018d00040 with size: 0.999939 MiB 00:08:32.887 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:32.887 element at address: 0x200019200000 with size: 0.999084 MiB 00:08:32.887 element at address: 0x200031e00000 with size: 0.994324 MiB 00:08:32.887 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:32.887 element at address: 0x200018a00000 with size: 0.959656 MiB 00:08:32.887 element at address: 0x200019500040 with size: 0.936401 MiB 00:08:32.887 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:32.887 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:08:32.887 element at address: 0x200000c00000 with size: 0.490173 MiB 00:08:32.887 element at address: 0x200018e00000 with size: 0.487976 MiB 00:08:32.887 element at address: 0x200019600000 
with size: 0.485413 MiB
00:08:32.887 element at address: 0x200012c00000 with size: 0.443481 MiB
00:08:32.887 element at address: 0x200028000000 with size: 0.390442 MiB
00:08:32.887 element at address: 0x200000800000 with size: 0.350891 MiB
00:08:32.887 list of standard malloc elements. size: 199.288452 MiB
00:08:32.887 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:08:32.887 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:08:32.887 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:08:32.887 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:08:32.887 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:08:32.887 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:08:32.887 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:08:32.887 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:08:32.887 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:08:32.887 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:08:32.887 element at address: 0x200012bff040 with size: 0.000305 MiB
00:08:32.887 element at address: 0x2000002d7b00 with size: 0.000244 MiB
00:08:32.887-00:08:32.889 (remaining standard malloc elements, 0x2000003d9d80 through 0x20002806fe80, each with size: 0.000244 MiB)
00:08:32.889 list of memzone associated elements.
size: 599.920898 MiB
00:08:32.889 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:08:32.889 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:08:32.889 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:08:32.889 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:08:32.889 element at address: 0x200012df4740 with size: 92.045105 MiB
00:08:32.889 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57839_0
00:08:32.889 element at address: 0x200000dff340 with size: 48.003113 MiB
00:08:32.889 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57839_0
00:08:32.889 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:08:32.889 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57839_0
00:08:32.889 element at address: 0x2000197be900 with size: 20.255615 MiB
00:08:32.889 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:08:32.889 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:08:32.889 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:08:32.889 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:08:32.889 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57839_0
00:08:32.889 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:08:32.889 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57839
00:08:32.889 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:08:32.889 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57839
00:08:32.889 element at address: 0x200018efde00 with size: 1.008179 MiB
00:08:32.889 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:08:32.889 element at address: 0x2000196bc780 with size: 1.008179 MiB
00:08:32.889 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:08:32.889 element at address: 0x200018afde00 with size: 1.008179 MiB
00:08:32.889 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:08:32.889 element at address: 0x200012cf25c0 with size: 1.008179 MiB
00:08:32.889 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:08:32.889 element at address: 0x200000cff100 with size: 1.000549 MiB
00:08:32.889 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57839
00:08:32.889 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:08:32.889 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57839
00:08:32.889 element at address: 0x2000192ffd40 with size: 1.000549 MiB
00:08:32.889 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57839
00:08:32.889 element at address: 0x200031efe8c0 with size: 1.000549 MiB
00:08:32.889 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57839
00:08:32.889 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:08:32.889 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57839
00:08:32.889 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:08:32.889 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57839
00:08:32.889 element at address: 0x200018e7dac0 with size: 0.500549 MiB
00:08:32.889 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:08:32.889 element at address: 0x200012c72280 with size: 0.500549 MiB
00:08:32.889 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:08:32.889 element at address: 0x20001967c440 with size: 0.250549 MiB
00:08:32.889 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:08:32.889 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:08:32.889 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57839
00:08:32.889 element at address: 0x20000085df80 with size: 0.125549 MiB
00:08:32.889 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57839
00:08:32.889 element at address: 0x200018af5ac0 with size: 0.031799 MiB
00:08:32.889 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:08:32.889 element at address: 0x200028064140 with size: 0.023804 MiB
00:08:32.889 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:08:32.889 element at address: 0x200000859d40 with size: 0.016174 MiB
00:08:32.889 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57839
00:08:32.889 element at address: 0x20002806a2c0 with size: 0.002502 MiB
00:08:32.889 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:08:32.889 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:08:32.889 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57839
00:08:32.889 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:08:32.889 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57839
00:08:32.889 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:08:32.889 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57839
00:08:32.889 element at address: 0x20002806ae00 with size: 0.000366 MiB
00:08:32.889 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:08:32.889 07:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:08:32.889 07:05:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57839
00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57839 ']'
00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57839
00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57839
killing process with pid 57839
07:05:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57839' 00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57839 00:08:32.889 07:05:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57839 00:08:35.429 00:08:35.429 real 0m3.913s 00:08:35.429 user 0m3.959s 00:08:35.429 sys 0m0.601s 00:08:35.429 07:05:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.429 ************************************ 00:08:35.429 END TEST dpdk_mem_utility 00:08:35.429 ************************************ 00:08:35.429 07:05:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:35.429 07:05:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:35.429 07:05:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.429 07:05:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.429 07:05:32 -- common/autotest_common.sh@10 -- # set +x 00:08:35.429 ************************************ 00:08:35.429 START TEST event 00:08:35.429 ************************************ 00:08:35.429 07:05:32 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:35.429 * Looking for test storage... 
00:08:35.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:35.429 07:05:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.429 07:05:32 event -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.429 07:05:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.429 07:05:32 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.430 07:05:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.430 07:05:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.430 07:05:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.430 07:05:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.430 07:05:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.430 07:05:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.430 07:05:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.430 07:05:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.430 07:05:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.430 07:05:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.430 07:05:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.430 07:05:32 event -- scripts/common.sh@344 -- # case "$op" in 00:08:35.430 07:05:32 event -- scripts/common.sh@345 -- # : 1 00:08:35.430 07:05:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.430 07:05:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.430 07:05:32 event -- scripts/common.sh@365 -- # decimal 1 00:08:35.430 07:05:32 event -- scripts/common.sh@353 -- # local d=1 00:08:35.430 07:05:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.430 07:05:32 event -- scripts/common.sh@355 -- # echo 1 00:08:35.430 07:05:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.430 07:05:32 event -- scripts/common.sh@366 -- # decimal 2 00:08:35.430 07:05:32 event -- scripts/common.sh@353 -- # local d=2 00:08:35.430 07:05:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.430 07:05:32 event -- scripts/common.sh@355 -- # echo 2 00:08:35.430 07:05:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.430 07:05:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.430 07:05:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.430 07:05:32 event -- scripts/common.sh@368 -- # return 0 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.430 --rc genhtml_branch_coverage=1 00:08:35.430 --rc genhtml_function_coverage=1 00:08:35.430 --rc genhtml_legend=1 00:08:35.430 --rc geninfo_all_blocks=1 00:08:35.430 --rc geninfo_unexecuted_blocks=1 00:08:35.430 00:08:35.430 ' 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.430 --rc genhtml_branch_coverage=1 00:08:35.430 --rc genhtml_function_coverage=1 00:08:35.430 --rc genhtml_legend=1 00:08:35.430 --rc geninfo_all_blocks=1 00:08:35.430 --rc geninfo_unexecuted_blocks=1 00:08:35.430 00:08:35.430 ' 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.430 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:35.430 --rc genhtml_branch_coverage=1 00:08:35.430 --rc genhtml_function_coverage=1 00:08:35.430 --rc genhtml_legend=1 00:08:35.430 --rc geninfo_all_blocks=1 00:08:35.430 --rc geninfo_unexecuted_blocks=1 00:08:35.430 00:08:35.430 ' 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.430 --rc genhtml_branch_coverage=1 00:08:35.430 --rc genhtml_function_coverage=1 00:08:35.430 --rc genhtml_legend=1 00:08:35.430 --rc geninfo_all_blocks=1 00:08:35.430 --rc geninfo_unexecuted_blocks=1 00:08:35.430 00:08:35.430 ' 00:08:35.430 07:05:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:35.430 07:05:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:35.430 07:05:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:35.430 07:05:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.430 07:05:32 event -- common/autotest_common.sh@10 -- # set +x 00:08:35.430 ************************************ 00:08:35.430 START TEST event_perf 00:08:35.430 ************************************ 00:08:35.430 07:05:32 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:35.430 Running I/O for 1 seconds...[2024-11-20 07:05:32.654622] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:08:35.430 [2024-11-20 07:05:32.654906] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:08:35.688 [2024-11-20 07:05:32.837293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.688 [2024-11-20 07:05:33.003276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.688 [2024-11-20 07:05:33.003498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.688 [2024-11-20 07:05:33.003581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.688 [2024-11-20 07:05:33.003591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.320 Running I/O for 1 seconds... 00:08:37.320 lcore 0: 193876 00:08:37.320 lcore 1: 193875 00:08:37.320 lcore 2: 193877 00:08:37.320 lcore 3: 193877 00:08:37.320 done. 
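The xtrace above shows `scripts/common.sh` deciding whether the installed lcov is older than 2 via `lt 1.15 2` / `cmp_versions`. As a hypothetical re-sketch (function bodies reconstructed from the trace, not copied from SPDK), the field-by-field comparison looks like this:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_versions/lt helpers traced above: split each version
# on ".", "-" or ":" and compare numerically field by field, treating
# missing fields as 0.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]}
    ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
        (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1
    done
    [[ $op == '==' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "lcov 1.15 predates 2"   # succeeds, as in the trace
```

Since `1 < 2` in the first field, `lt 1.15 2` returns 0, which is why the trace then enables the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options for the older lcov.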
00:08:37.320 ************************************ 00:08:37.320 END TEST event_perf 00:08:37.320 ************************************ 00:08:37.320 00:08:37.320 real 0m1.628s 00:08:37.320 user 0m4.384s 00:08:37.320 sys 0m0.116s 00:08:37.320 07:05:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.320 07:05:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:37.320 07:05:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:37.320 07:05:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:37.320 07:05:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.320 07:05:34 event -- common/autotest_common.sh@10 -- # set +x 00:08:37.320 ************************************ 00:08:37.320 START TEST event_reactor 00:08:37.320 ************************************ 00:08:37.320 07:05:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:37.320 [2024-11-20 07:05:34.332940] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:08:37.320 [2024-11-20 07:05:34.333108] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57987 ] 00:08:37.320 [2024-11-20 07:05:34.503015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.320 [2024-11-20 07:05:34.630121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.699 test_start 00:08:38.699 oneshot 00:08:38.699 tick 100 00:08:38.699 tick 100 00:08:38.699 tick 250 00:08:38.699 tick 100 00:08:38.699 tick 100 00:08:38.699 tick 250 00:08:38.699 tick 100 00:08:38.699 tick 500 00:08:38.699 tick 100 00:08:38.699 tick 100 00:08:38.699 tick 250 00:08:38.699 tick 100 00:08:38.699 tick 100 00:08:38.699 test_end 00:08:38.699 00:08:38.699 real 0m1.574s 00:08:38.699 user 0m1.377s 00:08:38.699 sys 0m0.088s 00:08:38.699 07:05:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.699 07:05:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:38.699 ************************************ 00:08:38.699 END TEST event_reactor 00:08:38.699 ************************************ 00:08:38.699 07:05:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:38.699 07:05:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:38.699 07:05:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.699 07:05:35 event -- common/autotest_common.sh@10 -- # set +x 00:08:38.699 ************************************ 00:08:38.699 START TEST event_reactor_perf 00:08:38.699 ************************************ 00:08:38.699 07:05:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:38.699 [2024-11-20 
07:05:35.974840] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:08:38.699 [2024-11-20 07:05:35.975063] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58023 ] 00:08:38.958 [2024-11-20 07:05:36.162824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.216 [2024-11-20 07:05:36.322140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.589 test_start 00:08:40.589 test_end 00:08:40.589 Performance: 279067 events per second 00:08:40.589 00:08:40.589 real 0m1.633s 00:08:40.589 user 0m1.399s 00:08:40.589 sys 0m0.123s 00:08:40.589 07:05:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.589 07:05:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:40.589 ************************************ 00:08:40.589 END TEST event_reactor_perf 00:08:40.589 ************************************ 00:08:40.589 07:05:37 event -- event/event.sh@49 -- # uname -s 00:08:40.589 07:05:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:40.589 07:05:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:40.589 07:05:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.589 07:05:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.589 07:05:37 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.589 ************************************ 00:08:40.589 START TEST event_scheduler 00:08:40.589 ************************************ 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:40.589 * Looking for test storage... 
00:08:40.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.589 07:05:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:40.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.589 --rc genhtml_branch_coverage=1 00:08:40.589 --rc genhtml_function_coverage=1 00:08:40.589 --rc genhtml_legend=1 00:08:40.589 --rc geninfo_all_blocks=1 00:08:40.589 --rc geninfo_unexecuted_blocks=1 00:08:40.589 00:08:40.589 ' 00:08:40.589 07:05:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:40.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.589 --rc genhtml_branch_coverage=1 00:08:40.589 --rc genhtml_function_coverage=1 00:08:40.589 --rc 
genhtml_legend=1 00:08:40.589 --rc geninfo_all_blocks=1 00:08:40.590 --rc geninfo_unexecuted_blocks=1 00:08:40.590 00:08:40.590 ' 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.590 --rc genhtml_branch_coverage=1 00:08:40.590 --rc genhtml_function_coverage=1 00:08:40.590 --rc genhtml_legend=1 00:08:40.590 --rc geninfo_all_blocks=1 00:08:40.590 --rc geninfo_unexecuted_blocks=1 00:08:40.590 00:08:40.590 ' 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:40.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.590 --rc genhtml_branch_coverage=1 00:08:40.590 --rc genhtml_function_coverage=1 00:08:40.590 --rc genhtml_legend=1 00:08:40.590 --rc geninfo_all_blocks=1 00:08:40.590 --rc geninfo_unexecuted_blocks=1 00:08:40.590 00:08:40.590 ' 00:08:40.590 07:05:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:40.590 07:05:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58099 00:08:40.590 07:05:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:40.590 07:05:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:40.590 07:05:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58099 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58099 ']' 00:08:40.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.590 07:05:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:40.848 [2024-11-20 07:05:37.908961] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:08:40.848 [2024-11-20 07:05:37.909386] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58099 ] 00:08:40.848 [2024-11-20 07:05:38.096279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.106 [2024-11-20 07:05:38.259205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.106 [2024-11-20 07:05:38.259353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.106 [2024-11-20 07:05:38.259532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.106 [2024-11-20 07:05:38.259545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.673 07:05:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:41.673 07:05:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:41.673 07:05:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:41.673 07:05:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.673 07:05:38 
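The scheduler app above is launched with `-m 0xF` and, as the NOTICE lines show, starts one reactor per set bit, on cores 0 through 3. A minimal sketch of that mask-to-lcore decoding (function name `mask_to_cores` is an illustration, not an SPDK helper):

```shell
#!/usr/bin/env bash
# Decode a hex core mask (as passed via -m) into the list of lcore IDs:
# walk the mask bit by bit, collecting the index of every set bit.
mask_to_cores() {
    local mask=$(( $1 )) core=0
    local -a cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")
        (( mask >>= 1, core++ ))
    done
    echo "${cores[@]}"
}

mask_to_cores 0xF   # -> 0 1 2 3, matching the four reactors above
```

The `--main-lcore=2` EAL parameter in the same log line comes from the separate `-p 0x2` option, which pins the main reactor to core 1's bit position within that mask.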
event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:41.673 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:41.673 POWER: Cannot set governor of lcore 0 to userspace 00:08:41.673 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:41.673 POWER: Cannot set governor of lcore 0 to performance 00:08:41.673 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:41.673 POWER: Cannot set governor of lcore 0 to userspace 00:08:41.673 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:41.673 POWER: Cannot set governor of lcore 0 to userspace 00:08:41.673 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:41.673 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:41.673 POWER: Unable to set Power Management Environment for lcore 0 00:08:41.673 [2024-11-20 07:05:38.911659] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:41.673 [2024-11-20 07:05:38.911818] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:41.673 [2024-11-20 07:05:38.911952] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:41.673 [2024-11-20 07:05:38.912081] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:41.673 [2024-11-20 07:05:38.912203] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:41.673 [2024-11-20 07:05:38.912327] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:41.673 07:05:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.673 07:05:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:41.673 07:05:38 event.event_scheduler -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:41.673 07:05:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:41.932 [2024-11-20 07:05:39.247891] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:42.191 07:05:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:42.191 07:05:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.191 07:05:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:42.191 ************************************ 00:08:42.191 START TEST scheduler_create_thread 00:08:42.191 ************************************ 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.191 2 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 
00:08:42.191 3 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.191 4 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.191 5 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.191 6 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 
00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.191 7 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:42.191 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.192 8 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.192 9 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.192 10 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.192 07:05:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:43.565 07:05:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.565 07:05:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:43.565 07:05:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin 
scheduler_thread_delete 12 00:08:43.565 07:05:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.565 07:05:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.939 ************************************ 00:08:44.939 END TEST scheduler_create_thread 00:08:44.939 ************************************ 00:08:44.939 07:05:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.939 00:08:44.939 real 0m2.620s 00:08:44.939 user 0m0.020s 00:08:44.939 sys 0m0.005s 00:08:44.939 07:05:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.939 07:05:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:44.939 07:05:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:44.939 07:05:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58099 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58099 ']' 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58099 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58099 00:08:44.939 killing process with pid 58099 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58099' 00:08:44.939 07:05:41 event.event_scheduler -- 
common/autotest_common.sh@973 -- # kill 58099 00:08:44.939 07:05:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58099 00:08:45.197 [2024-11-20 07:05:42.358553] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:46.132 00:08:46.132 real 0m5.824s 00:08:46.132 user 0m10.167s 00:08:46.132 sys 0m0.542s 00:08:46.132 ************************************ 00:08:46.132 END TEST event_scheduler 00:08:46.132 ************************************ 00:08:46.132 07:05:43 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.132 07:05:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:46.390 07:05:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:46.390 07:05:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:46.390 07:05:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.390 07:05:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.390 07:05:43 event -- common/autotest_common.sh@10 -- # set +x 00:08:46.390 ************************************ 00:08:46.390 START TEST app_repeat 00:08:46.390 ************************************ 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:46.390 Process app_repeat pid: 58211 00:08:46.390 07:05:43 
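Both the scheduler and app_repeat sections above use `waitforlisten` to block until the freshly spawned app is serving RPCs on its UNIX domain socket (`/var/tmp/spdk.sock`, `/var/tmp/spdk-nbd.sock`). A hypothetical minimal analogue of that polling loop (the real helper also checks the PID and retry budget; `wait_for_socket` here is an assumed name):

```shell
#!/usr/bin/env bash
# Poll until the given UNIX socket exists, up to `retries` attempts of
# 0.1s each; return nonzero if it never appears.
wait_for_socket() {
    local sock=$1 retries=${2:-100}
    while (( retries-- )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage would mirror the log: `wait_for_socket /var/tmp/spdk-nbd.sock || exit 1` before issuing the first `rpc.py -s /var/tmp/spdk-nbd.sock` call.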
event.app_repeat -- event/event.sh@19 -- # repeat_pid=58211 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58211' 00:08:46.390 spdk_app_start Round 0 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:46.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:46.390 07:05:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58211 /var/tmp/spdk-nbd.sock 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58211 ']' 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.390 07:05:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:46.390 [2024-11-20 07:05:43.539151] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:08:46.390 [2024-11-20 07:05:43.539314] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58211 ] 00:08:46.648 [2024-11-20 07:05:43.716596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:46.648 [2024-11-20 07:05:43.853450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.648 [2024-11-20 07:05:43.853462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.583 07:05:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.583 07:05:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:47.583 07:05:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.583 Malloc0 00:08:47.583 07:05:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:48.150 Malloc1 00:08:48.150 07:05:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:48.150 07:05:45 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:48.150 07:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:48.408 /dev/nbd0 00:08:48.408 07:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:48.408 07:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:48.408 07:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:48.409 07:05:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:48.409 1+0 records in 00:08:48.409 1+0 
records out 00:08:48.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310756 s, 13.2 MB/s 00:08:48.409 07:05:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.409 07:05:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:48.409 07:05:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.409 07:05:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:48.409 07:05:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:48.409 07:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.409 07:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:48.409 07:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:48.667 /dev/nbd1 00:08:48.667 07:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:48.667 07:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:48.667 1+0 records in 00:08:48.667 1+0 records out 00:08:48.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412449 s, 9.9 MB/s 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:48.667 07:05:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:48.667 07:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:48.668 07:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:48.668 07:05:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.668 07:05:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.668 07:05:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.925 07:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.925 { 00:08:48.925 "nbd_device": "/dev/nbd0", 00:08:48.925 "bdev_name": "Malloc0" 00:08:48.925 }, 00:08:48.925 { 00:08:48.925 "nbd_device": "/dev/nbd1", 00:08:48.925 "bdev_name": "Malloc1" 00:08:48.925 } 00:08:48.925 ]' 00:08:48.925 07:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.925 { 00:08:48.925 "nbd_device": "/dev/nbd0", 00:08:48.925 "bdev_name": "Malloc0" 00:08:48.925 }, 00:08:48.925 { 00:08:48.925 "nbd_device": "/dev/nbd1", 00:08:48.925 "bdev_name": "Malloc1" 00:08:48.925 } 00:08:48.925 ]' 00:08:48.925 07:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
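The waitfornbd calls traced above poll /proc/partitions for the device name up to 20 times before breaking out. A minimal sketch of that loop follows; the second (partitions-file) argument and the 0.1 s sleep are assumptions added here so the sketch can be exercised against a plain fixture file instead of a live nbd device:

```shell
# Sketch of the waitfornbd polling loop seen in the autotest_common.sh trace:
# retry until the device name appears as a whole word in the partitions table,
# or give up after 20 attempts. The optional second argument is an assumption
# (the real helper reads /proc/partitions directly).
waitfornbd_sketch() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
```

Usage against a fixture: write a fake partitions table containing `nbd0` to a temp file and call `waitfornbd_sketch nbd0 <file>`; it returns 0 as soon as the grep matches.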
00:08:49.184 07:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:49.184 /dev/nbd1' 00:08:49.184 07:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:49.185 /dev/nbd1' 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:49.185 256+0 records in 00:08:49.185 256+0 records out 00:08:49.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0095372 s, 110 MB/s 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:49.185 256+0 records in 00:08:49.185 256+0 records out 00:08:49.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297118 s, 35.3 MB/s 00:08:49.185 07:05:46 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:49.185 256+0 records in 00:08:49.185 256+0 records out 00:08:49.185 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0337798 s, 31.0 MB/s 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.185 07:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:49.443 07:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:49.701 07:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:49.701 07:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:49.701 07:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:49.701 07:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:49.701 07:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:49.702 07:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:49.702 07:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:08:49.702 07:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:49.702 07:05:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.702 07:05:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.702 07:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.960 07:05:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.960 07:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.960 07:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:50.218 07:05:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:50.218 07:05:47 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:50.783 07:05:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:51.733 [2024-11-20 07:05:48.847432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:51.733 [2024-11-20 07:05:49.012549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.733 [2024-11-20 07:05:49.012579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.991 
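At the end of the round, nbd_get_count reduces the nbd_get_disks JSON to a device count with `grep -c /dev/nbd`, as traced above. A rough sketch of that reduction; `grep -o` stands in for the jq step so it runs without jq, and the JSON literal in the test is a stand-in for a live `rpc.py ... nbd_get_disks` response:

```shell
# Sketch of nbd_get_count's reduction: pull the nbd_device paths out of the
# nbd_get_disks JSON, then count them with grep -c as nbd_common.sh does.
# grep -o replaces the jq '.[] | .nbd_device' step used in the real script.
nbd_count_sketch() {
    local json=$1
    echo "$json" | grep -o '"/dev/nbd[0-9]*"' | grep -c /dev/nbd
}
```

For the two-disk response traced above this prints 2; for the empty `[]` response after the disks are stopped it prints 0, matching the `count=0` seen in the trace.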
[2024-11-20 07:05:49.242362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:51.991 [2024-11-20 07:05:49.242460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:53.889 spdk_app_start Round 1 00:08:53.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:53.889 07:05:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:53.889 07:05:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:53.889 07:05:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58211 /var/tmp/spdk-nbd.sock 00:08:53.889 07:05:50 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58211 ']' 00:08:53.889 07:05:50 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:53.889 07:05:50 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.889 07:05:50 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:53.889 07:05:50 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.889 07:05:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:53.889 07:05:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.889 07:05:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:53.889 07:05:51 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:54.147 Malloc0 00:08:54.147 07:05:51 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:54.714 Malloc1 00:08:54.714 07:05:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:54.714 07:05:51 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.714 07:05:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:54.972 /dev/nbd0 00:08:54.972 07:05:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:54.972 07:05:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:54.972 07:05:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:54.973 07:05:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:54.973 1+0 records in 00:08:54.973 1+0 records out 00:08:54.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610087 s, 6.7 MB/s 00:08:54.973 07:05:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.973 07:05:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:54.973 07:05:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:54.973 07:05:52 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:54.973 07:05:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:54.973 07:05:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.973 07:05:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:54.973 07:05:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:55.231 /dev/nbd1 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:55.231 1+0 records in 00:08:55.231 1+0 records out 00:08:55.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038542 s, 10.6 MB/s 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:55.231 07:05:52 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:55.231 07:05:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.231 07:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:55.489 07:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:55.489 { 00:08:55.489 "nbd_device": "/dev/nbd0", 00:08:55.489 "bdev_name": "Malloc0" 00:08:55.489 }, 00:08:55.489 { 00:08:55.489 "nbd_device": "/dev/nbd1", 00:08:55.489 "bdev_name": "Malloc1" 00:08:55.489 } 00:08:55.489 ]' 00:08:55.489 07:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:55.489 { 00:08:55.489 "nbd_device": "/dev/nbd0", 00:08:55.489 "bdev_name": "Malloc0" 00:08:55.489 }, 00:08:55.489 { 00:08:55.489 "nbd_device": "/dev/nbd1", 00:08:55.489 "bdev_name": "Malloc1" 00:08:55.489 } 00:08:55.489 ]' 00:08:55.489 07:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:55.489 07:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:55.489 /dev/nbd1' 00:08:55.489 07:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:55.489 /dev/nbd1' 00:08:55.489 07:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:55.490 
07:05:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:55.490 256+0 records in 00:08:55.490 256+0 records out 00:08:55.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00623529 s, 168 MB/s 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.490 07:05:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:55.749 256+0 records in 00:08:55.749 256+0 records out 00:08:55.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0315335 s, 33.3 MB/s 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:55.749 256+0 records in 00:08:55.749 256+0 records out 00:08:55.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0331267 s, 31.7 MB/s 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
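The dd writes above feed nbd_dd_data_verify's verify phase, which `cmp`s the first 1M of each device against the random source file. A sketch of that write-then-verify round trip, with plain temp files standing in for /dev/nbd0 and /dev/nbd1 (and `oflag=direct` dropped) so it runs without nbd devices:

```shell
# Sketch of nbd_dd_data_verify's write + verify phases as traced above: fill
# a temp file with 1 MiB of random data, copy it to each target, then compare
# each target against the source with cmp -b -n 1M. Regular files replace the
# nbd devices here, so oflag=direct is omitted.
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev" && echo "verify ok"
done
rm -f "$tmp_file" "$dev0" "$dev1"
```

`cmp` exits 0 and prints nothing when the ranges match, so a silent trace here (as in the log) means the data round-tripped intact.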
00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.749 07:05:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:56.008 07:05:53 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.008 07:05:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.266 07:05:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:56.523 07:05:53 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:56.523 07:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:56.523 07:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:56.781 07:05:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:56.781 07:05:53 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:57.038 07:05:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:58.412 [2024-11-20 07:05:55.398523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:58.412 [2024-11-20 07:05:55.588934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.412 [2024-11-20 07:05:55.588935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.671 [2024-11-20 07:05:55.779535] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:58.671 [2024-11-20 07:05:55.779650] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:00.076 spdk_app_start Round 2 00:09:00.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
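Round 2 above is the final pass of event.sh's `{0..2}` loop. A dry-run reconstruction of the loop's shape as it appears in this trace; the commands are echoed rather than executed, since the real sequence needs an SPDK checkout and the app_repeat binary:

```shell
# Dry-run reconstruction of the repeat loop driving this trace: each round
# starts app_repeat on the RPC socket, creates the two 64 MB / 4096-byte-block
# malloc bdevs, runs the nbd data verify, then kills the instance. run() only
# echoes, so the round structure is visible without SPDK installed.
run() { echo "+ $*"; }
for i in 0 1 2; do
    run app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
    run rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
    run rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
    run rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # then sleep 3
done
```

Each round in the log follows this shape, separated by the `sleep 3` between `spdk_kill_instance SIGTERM` and the next `spdk_app_start Round N` banner.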
00:09:00.076 07:05:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:00.076 07:05:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:00.076 07:05:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58211 /var/tmp/spdk-nbd.sock 00:09:00.076 07:05:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58211 ']' 00:09:00.076 07:05:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:00.076 07:05:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.076 07:05:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:00.076 07:05:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.076 07:05:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:00.641 07:05:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.641 07:05:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:00.641 07:05:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:00.899 Malloc0 00:09:00.899 07:05:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:01.158 Malloc1 00:09:01.158 07:05:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.158 07:05:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:01.418 /dev/nbd0 00:09:01.418 07:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:01.418 07:05:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:01.418 1+0 records in 00:09:01.418 1+0 records out 00:09:01.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387452 s, 10.6 MB/s 00:09:01.418 07:05:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:01.677 07:05:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:01.677 07:05:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:01.677 07:05:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.677 07:05:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:01.677 07:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.677 07:05:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.677 07:05:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:01.936 /dev/nbd1 00:09:01.936 07:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:01.936 07:05:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:01.936 07:05:59 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:01.936 1+0 records in 00:09:01.936 1+0 records out 00:09:01.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361892 s, 11.3 MB/s 00:09:01.936 07:05:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:01.937 07:05:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:01.937 07:05:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:01.937 07:05:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:01.937 07:05:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:01.937 07:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:01.937 07:05:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:01.937 07:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:01.937 07:05:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.937 07:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:02.503 { 00:09:02.503 "nbd_device": "/dev/nbd0", 00:09:02.503 "bdev_name": "Malloc0" 00:09:02.503 }, 00:09:02.503 { 00:09:02.503 "nbd_device": "/dev/nbd1", 00:09:02.503 "bdev_name": "Malloc1" 00:09:02.503 } 00:09:02.503 ]' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:02.503 { 
00:09:02.503 "nbd_device": "/dev/nbd0", 00:09:02.503 "bdev_name": "Malloc0" 00:09:02.503 }, 00:09:02.503 { 00:09:02.503 "nbd_device": "/dev/nbd1", 00:09:02.503 "bdev_name": "Malloc1" 00:09:02.503 } 00:09:02.503 ]' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:02.503 /dev/nbd1' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:02.503 /dev/nbd1' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:02.503 256+0 records in 00:09:02.503 256+0 records out 00:09:02.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00671662 s, 156 MB/s 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.503 07:05:59 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:02.503 256+0 records in 00:09:02.503 256+0 records out 00:09:02.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304739 s, 34.4 MB/s 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:02.503 256+0 records in 00:09:02.503 256+0 records out 00:09:02.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347856 s, 30.1 MB/s 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.503 07:05:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:03.120 07:06:00 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.120 07:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:03.686 07:06:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:03.686 07:06:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:04.250 07:06:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:05.183 
[2024-11-20 07:06:02.387113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:05.441 [2024-11-20 07:06:02.516527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.441 [2024-11-20 07:06:02.516537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.441 [2024-11-20 07:06:02.707625] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:05.441 [2024-11-20 07:06:02.707727] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:07.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:07.337 07:06:04 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58211 /var/tmp/spdk-nbd.sock 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58211 ']' 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:07.337 07:06:04 event.app_repeat -- event/event.sh@39 -- # killprocess 58211 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58211 ']' 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58211 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58211 00:09:07.337 killing process with pid 58211 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58211' 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58211 00:09:07.337 07:06:04 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58211 00:09:08.270 spdk_app_start is called in Round 0. 00:09:08.270 Shutdown signal received, stop current app iteration 00:09:08.270 Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 reinitialization... 00:09:08.270 spdk_app_start is called in Round 1. 00:09:08.270 Shutdown signal received, stop current app iteration 00:09:08.270 Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 reinitialization... 00:09:08.270 spdk_app_start is called in Round 2. 
00:09:08.270 Shutdown signal received, stop current app iteration 00:09:08.270 Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 reinitialization... 00:09:08.270 spdk_app_start is called in Round 3. 00:09:08.270 Shutdown signal received, stop current app iteration 00:09:08.270 ************************************ 00:09:08.270 END TEST app_repeat 00:09:08.270 ************************************ 00:09:08.270 07:06:05 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:08.270 07:06:05 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:08.270 00:09:08.270 real 0m22.099s 00:09:08.270 user 0m49.013s 00:09:08.270 sys 0m3.203s 00:09:08.270 07:06:05 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.270 07:06:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 07:06:05 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:08.528 07:06:05 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:08.528 07:06:05 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.528 07:06:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.528 07:06:05 event -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 ************************************ 00:09:08.528 START TEST cpu_locks 00:09:08.528 ************************************ 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:09:08.528 * Looking for test storage... 
00:09:08.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.528 07:06:05 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.528 --rc genhtml_branch_coverage=1 00:09:08.528 --rc genhtml_function_coverage=1 00:09:08.528 --rc genhtml_legend=1 00:09:08.528 --rc geninfo_all_blocks=1 00:09:08.528 --rc geninfo_unexecuted_blocks=1 00:09:08.528 00:09:08.528 ' 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.528 --rc genhtml_branch_coverage=1 00:09:08.528 --rc genhtml_function_coverage=1 00:09:08.528 --rc genhtml_legend=1 00:09:08.528 --rc geninfo_all_blocks=1 00:09:08.528 --rc geninfo_unexecuted_blocks=1 
00:09:08.528 00:09:08.528 ' 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.528 --rc genhtml_branch_coverage=1 00:09:08.528 --rc genhtml_function_coverage=1 00:09:08.528 --rc genhtml_legend=1 00:09:08.528 --rc geninfo_all_blocks=1 00:09:08.528 --rc geninfo_unexecuted_blocks=1 00:09:08.528 00:09:08.528 ' 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:08.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.528 --rc genhtml_branch_coverage=1 00:09:08.528 --rc genhtml_function_coverage=1 00:09:08.528 --rc genhtml_legend=1 00:09:08.528 --rc geninfo_all_blocks=1 00:09:08.528 --rc geninfo_unexecuted_blocks=1 00:09:08.528 00:09:08.528 ' 00:09:08.528 07:06:05 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:08.528 07:06:05 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:08.528 07:06:05 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:08.528 07:06:05 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.528 07:06:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.528 ************************************ 00:09:08.528 START TEST default_locks 00:09:08.528 ************************************ 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58691 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:08.528 
07:06:05 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58691 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58691 ']' 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.528 07:06:05 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:08.786 [2024-11-20 07:06:05.911208] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:09:08.786 [2024-11-20 07:06:05.911380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58691 ] 00:09:08.786 [2024-11-20 07:06:06.086604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.043 [2024-11-20 07:06:06.217581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.976 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.976 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:09.976 07:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58691 00:09:09.976 07:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58691 00:09:09.976 07:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58691 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58691 ']' 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58691 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58691 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.236 killing process with pid 58691 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58691' 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58691 00:09:10.236 07:06:07 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58691 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58691 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58691 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58691 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58691 ']' 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:12.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:12.789 ERROR: process (pid: 58691) is no longer running
00:09:12.789 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58691) - No such process
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:12.789
00:09:12.789 real 0m3.986s
00:09:12.789 user 0m4.015s
00:09:12.789 sys 0m0.724s
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.789 ************************************
00:09:12.789 END TEST default_locks
00:09:12.789 07:06:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:12.789 ************************************
00:09:12.790 07:06:09 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:09:12.790 07:06:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:12.790 07:06:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.790 07:06:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:12.790 ************************************
00:09:12.790 START TEST default_locks_via_rpc
00:09:12.790 ************************************
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58761
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58761
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58761 ']'
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:12.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:12.790 07:06:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:12.790 [2024-11-20 07:06:09.949635] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:12.790 [2024-11-20 07:06:09.949841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58761 ]
00:09:13.048 [2024-11-20 07:06:10.126073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.048 [2024-11-20 07:06:10.281092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:13.983 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:13.984 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:13.984 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:13.984 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:13.984 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58761
00:09:13.984 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:13.984 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58761
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58761
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58761 ']'
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58761
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58761
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:14.242 killing process with pid 58761
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58761'
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58761
00:09:14.242 07:06:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58761
00:09:16.772
00:09:16.772 real 0m3.898s
00:09:16.772 user 0m3.936s
00:09:16.772 sys 0m0.665s
00:09:16.772 07:06:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.772 07:06:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:16.772 ************************************
00:09:16.772 END TEST default_locks_via_rpc
00:09:16.772 ************************************
00:09:16.772 07:06:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:16.772 07:06:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:16.772 07:06:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.772 07:06:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:16.772 ************************************
00:09:16.772 START TEST non_locking_app_on_locked_coremask
00:09:16.772 ************************************
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58835
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58835 /var/tmp/spdk.sock
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58835 ']'
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:16.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:16.772 07:06:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:16.772 [2024-11-20 07:06:13.895665] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:16.772 [2024-11-20 07:06:13.895830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58835 ]
00:09:16.772 [2024-11-20 07:06:14.066177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:17.030 [2024-11-20 07:06:14.197085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.965 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:17.965 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:17.965 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:17.965 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58851
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58851 /var/tmp/spdk2.sock
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58851 ']'
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:17.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:17.966 07:06:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:17.966 [2024-11-20 07:06:15.177628] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:17.966 [2024-11-20 07:06:15.177849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58851 ]
00:09:18.223 [2024-11-20 07:06:15.377023] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:18.223 [2024-11-20 07:06:15.377118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:18.481 [2024-11-20 07:06:15.647513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.010 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:21.010 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:21.010 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58835
00:09:21.010 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58835
00:09:21.010 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58835
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58835 ']'
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58835
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58835
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:21.576 killing process with pid 58835
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58835'
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58835
00:09:21.576 07:06:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58835
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58851
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58851 ']'
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58851
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58851
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:26.881 killing process with pid 58851
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58851'
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58851
00:09:26.881 07:06:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58851
00:09:28.258
00:09:28.258 real 0m11.684s
00:09:28.258 user 0m12.285s
00:09:28.258 sys 0m1.397s
00:09:28.258 07:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:28.258 07:06:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:28.258 ************************************
00:09:28.258 END TEST non_locking_app_on_locked_coremask
00:09:28.258 ************************************
00:09:28.258 07:06:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:28.258 07:06:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:28.258 07:06:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:28.258 07:06:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:28.258 ************************************
00:09:28.258 START TEST locking_app_on_unlocked_coremask
00:09:28.258 ************************************
00:09:28.258 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:28.258 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59004
00:09:28.258 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59004 /var/tmp/spdk.sock
00:09:28.258 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:28.258 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59004 ']'
00:09:28.258 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:28.259 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:28.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:28.259 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:28.259 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:28.259 07:06:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:28.521 [2024-11-20 07:06:25.633113] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:28.521 [2024-11-20 07:06:25.633259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59004 ]
00:09:28.521 [2024-11-20 07:06:25.806025] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:28.521 [2024-11-20 07:06:25.806093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.818 [2024-11-20 07:06:25.936467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59025
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59025 /var/tmp/spdk2.sock
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59025 ']'
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:29.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:29.752 07:06:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:29.752 [2024-11-20 07:06:26.927823] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:29.752 [2024-11-20 07:06:26.928026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59025 ]
00:09:30.010 [2024-11-20 07:06:27.130530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.269 [2024-11-20 07:06:27.397244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.801 07:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:32.801 07:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:32.801 07:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59025
00:09:32.801 07:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59025
00:09:32.801 07:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59004
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59004 ']'
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59004
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59004
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:33.370 killing process with pid 59004
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59004'
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59004
00:09:33.370 07:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59004
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59025
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59025 ']'
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59025
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59025
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59025'
00:09:38.661 killing process with pid 59025
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59025
00:09:38.661 07:06:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59025
00:09:40.038
00:09:40.038 real 0m11.797s
00:09:40.038 user 0m12.474s
00:09:40.038 sys 0m1.524s
00:09:40.038 07:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:40.038 07:06:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:40.038 ************************************
00:09:40.038 END TEST locking_app_on_unlocked_coremask
00:09:40.038 ************************************
00:09:40.297 07:06:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:40.297 07:06:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:40.297 07:06:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:40.297 07:06:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:40.297 ************************************
00:09:40.297 START TEST locking_app_on_locked_coremask
00:09:40.297 ************************************
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59181
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59181 /var/tmp/spdk.sock
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59181 ']'
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:40.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:40.297 07:06:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:40.297 [2024-11-20 07:06:37.493449] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:40.297 [2024-11-20 07:06:37.494545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59181 ]
00:09:40.556 [2024-11-20 07:06:37.690833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:40.556 [2024-11-20 07:06:37.847318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59197
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59197 /var/tmp/spdk2.sock
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59197 /var/tmp/spdk2.sock
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59197 /var/tmp/spdk2.sock
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59197 ']'
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:41.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:41.492 07:06:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:41.771 [2024-11-20 07:06:38.857613] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:41.771 [2024-11-20 07:06:38.857791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59197 ]
00:09:41.771 [2024-11-20 07:06:39.061567] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59181 has claimed it.
00:09:41.771 [2024-11-20 07:06:39.061689] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:42.335 ERROR: process (pid: 59197) is no longer running
00:09:42.335 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59197) - No such process
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59181
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59181
00:09:42.335 07:06:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59181
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59181 ']'
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59181
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59181
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:42.901 killing process with pid 59181
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59181'
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59181
00:09:42.901 07:06:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59181
00:09:45.452
00:09:45.452 real 0m4.929s
00:09:45.452 user 0m5.271s
00:09:45.452 sys 0m0.942s
00:09:45.452 07:06:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:45.452 07:06:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 ************************************
00:09:45.452 END TEST locking_app_on_locked_coremask
00:09:45.452 ************************************
00:09:45.452 07:06:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:45.452 07:06:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:45.452 07:06:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:45.452 07:06:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 ************************************
00:09:45.452 START TEST locking_overlapped_coremask
00:09:45.452 ************************************
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59267
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59267 /var/tmp/spdk.sock
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59267 ']'
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:45.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:45.452 07:06:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:45.452 [2024-11-20 07:06:42.492118] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization...
00:09:45.452 [2024-11-20 07:06:42.492342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59267 ]
00:09:45.452 [2024-11-20 07:06:42.689214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:45.711 [2024-11-20 07:06:42.831523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:45.711 [2024-11-20 07:06:42.831677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.711 [2024-11-20 07:06:42.831695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:46.649 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:46.649 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:46.649 07:06:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59290
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59290 /var/tmp/spdk2.sock
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59290 /var/tmp/spdk2.sock
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59290 /var/tmp/spdk2.sock 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59290 ']' 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.650 07:06:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:46.650 [2024-11-20 07:06:43.891535] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:09:46.650 [2024-11-20 07:06:43.891708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59290 ] 00:09:46.909 [2024-11-20 07:06:44.098632] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59267 has claimed it. 00:09:46.909 [2024-11-20 07:06:44.098739] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:09:47.479 ERROR: process (pid: 59290) is no longer running 00:09:47.479 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59290) - No such process 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59267 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59267 ']' 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59267 00:09:47.479 07:06:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59267 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59267' 00:09:47.479 killing process with pid 59267 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59267 00:09:47.479 07:06:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59267 00:09:50.016 00:09:50.016 real 0m4.541s 00:09:50.016 user 0m12.351s 00:09:50.016 sys 0m0.750s 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:50.016 ************************************ 00:09:50.016 END TEST locking_overlapped_coremask 00:09:50.016 ************************************ 00:09:50.016 07:06:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:50.016 07:06:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.016 07:06:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.016 07:06:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:50.016 ************************************ 00:09:50.016 START TEST 
locking_overlapped_coremask_via_rpc 00:09:50.016 ************************************ 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59354 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59354 /var/tmp/spdk.sock 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59354 ']' 00:09:50.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.016 07:06:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.016 [2024-11-20 07:06:47.058402] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:09:50.016 [2024-11-20 07:06:47.058808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59354 ] 00:09:50.016 [2024-11-20 07:06:47.239402] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:50.016 [2024-11-20 07:06:47.239679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.274 [2024-11-20 07:06:47.389036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.274 [2024-11-20 07:06:47.389204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.274 [2024-11-20 07:06:47.389219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59372 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59372 /var/tmp/spdk2.sock 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:51.212 07:06:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.212 07:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.212 [2024-11-20 07:06:48.381334] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:09:51.212 [2024-11-20 07:06:48.381742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59372 ] 00:09:51.471 [2024-11-20 07:06:48.580583] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:51.471 [2024-11-20 07:06:48.580670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:51.730 [2024-11-20 07:06:48.872165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.730 [2024-11-20 07:06:48.876048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.730 [2024-11-20 07:06:48.876069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.265 07:06:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 [2024-11-20 07:06:51.147330] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59354 has claimed it. 00:09:54.265 request: 00:09:54.265 { 00:09:54.265 "method": "framework_enable_cpumask_locks", 00:09:54.265 "req_id": 1 00:09:54.265 } 00:09:54.265 Got JSON-RPC error response 00:09:54.265 response: 00:09:54.265 { 00:09:54.265 "code": -32603, 00:09:54.265 "message": "Failed to claim CPU core: 2" 00:09:54.265 } 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59354 /var/tmp/spdk.sock 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59354 ']' 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59372 /var/tmp/spdk2.sock 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59372 ']' 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:54.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:54.265 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:54.524 00:09:54.524 real 0m4.809s 00:09:54.524 user 0m1.767s 00:09:54.524 sys 0m0.227s 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.524 07:06:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:54.524 ************************************ 00:09:54.524 END TEST locking_overlapped_coremask_via_rpc 00:09:54.524 ************************************ 00:09:54.524 07:06:51 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:54.524 07:06:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59354 ]] 00:09:54.524 07:06:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59354 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59354 ']' 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59354 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59354 00:09:54.524 killing process with pid 59354 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59354' 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59354 00:09:54.524 07:06:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59354 00:09:57.052 07:06:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59372 ]] 00:09:57.052 07:06:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59372 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59372 ']' 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59372 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59372 00:09:57.052 killing process with pid 59372 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59372' 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59372 00:09:57.052 07:06:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59372 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:59.588 Process with pid 59354 is not found 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59354 ]] 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59354 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59354 ']' 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59354 00:09:59.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59354) - No such process 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59354 is not found' 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59372 ]] 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59372 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59372 ']' 00:09:59.588 Process with pid 59372 is not found 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59372 00:09:59.588 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59372) - No such process 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59372 is not found' 00:09:59.588 07:06:56 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:59.588 00:09:59.588 real 0m50.877s 00:09:59.588 user 1m28.547s 00:09:59.588 sys 0m7.493s 00:09:59.588 ************************************ 00:09:59.588 END TEST cpu_locks 00:09:59.588 ************************************ 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:59.588 07:06:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:59.588 ************************************ 00:09:59.588 END TEST event 00:09:59.588 ************************************ 00:09:59.588 00:09:59.588 real 1m24.132s 00:09:59.588 user 2m35.093s 00:09:59.588 sys 0m11.838s 00:09:59.588 07:06:56 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.588 07:06:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:59.588 07:06:56 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:59.588 07:06:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.588 07:06:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.588 07:06:56 -- common/autotest_common.sh@10 -- # set +x 00:09:59.588 ************************************ 00:09:59.588 START TEST thread 00:09:59.588 ************************************ 00:09:59.588 07:06:56 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:59.588 * Looking for test storage... 
00:09:59.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:59.588 07:06:56 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.588 07:06:56 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.588 07:06:56 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.588 07:06:56 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.588 07:06:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.588 07:06:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.588 07:06:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.588 07:06:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.588 07:06:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.588 07:06:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.588 07:06:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.588 07:06:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.588 07:06:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.588 07:06:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.588 07:06:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.588 07:06:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:59.588 07:06:56 thread -- scripts/common.sh@345 -- # : 1 00:09:59.589 07:06:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.589 07:06:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.589 07:06:56 thread -- scripts/common.sh@365 -- # decimal 1 00:09:59.589 07:06:56 thread -- scripts/common.sh@353 -- # local d=1 00:09:59.589 07:06:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.589 07:06:56 thread -- scripts/common.sh@355 -- # echo 1 00:09:59.589 07:06:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.589 07:06:56 thread -- scripts/common.sh@366 -- # decimal 2 00:09:59.589 07:06:56 thread -- scripts/common.sh@353 -- # local d=2 00:09:59.589 07:06:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.589 07:06:56 thread -- scripts/common.sh@355 -- # echo 2 00:09:59.589 07:06:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.589 07:06:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.589 07:06:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.589 07:06:56 thread -- scripts/common.sh@368 -- # return 0 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.589 --rc genhtml_branch_coverage=1 00:09:59.589 --rc genhtml_function_coverage=1 00:09:59.589 --rc genhtml_legend=1 00:09:59.589 --rc geninfo_all_blocks=1 00:09:59.589 --rc geninfo_unexecuted_blocks=1 00:09:59.589 00:09:59.589 ' 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.589 --rc genhtml_branch_coverage=1 00:09:59.589 --rc genhtml_function_coverage=1 00:09:59.589 --rc genhtml_legend=1 00:09:59.589 --rc geninfo_all_blocks=1 00:09:59.589 --rc geninfo_unexecuted_blocks=1 00:09:59.589 00:09:59.589 ' 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.589 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.589 --rc genhtml_branch_coverage=1 00:09:59.589 --rc genhtml_function_coverage=1 00:09:59.589 --rc genhtml_legend=1 00:09:59.589 --rc geninfo_all_blocks=1 00:09:59.589 --rc geninfo_unexecuted_blocks=1 00:09:59.589 00:09:59.589 ' 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.589 --rc genhtml_branch_coverage=1 00:09:59.589 --rc genhtml_function_coverage=1 00:09:59.589 --rc genhtml_legend=1 00:09:59.589 --rc geninfo_all_blocks=1 00:09:59.589 --rc geninfo_unexecuted_blocks=1 00:09:59.589 00:09:59.589 ' 00:09:59.589 07:06:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.589 07:06:56 thread -- common/autotest_common.sh@10 -- # set +x 00:09:59.589 ************************************ 00:09:59.589 START TEST thread_poller_perf 00:09:59.589 ************************************ 00:09:59.589 07:06:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:59.589 [2024-11-20 07:06:56.840109] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:09:59.589 [2024-11-20 07:06:56.840585] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59573 ] 00:09:59.848 [2024-11-20 07:06:57.038473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.106 [2024-11-20 07:06:57.193817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.106 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:01.483 [2024-11-20T07:06:58.803Z] ====================================== 00:10:01.483 [2024-11-20T07:06:58.803Z] busy:2208727172 (cyc) 00:10:01.483 [2024-11-20T07:06:58.803Z] total_run_count: 298000 00:10:01.483 [2024-11-20T07:06:58.803Z] tsc_hz: 2200000000 (cyc) 00:10:01.483 [2024-11-20T07:06:58.803Z] ====================================== 00:10:01.483 [2024-11-20T07:06:58.803Z] poller_cost: 7411 (cyc), 3368 (nsec) 00:10:01.483 00:10:01.483 real 0m1.653s 00:10:01.483 user 0m1.422s 00:10:01.483 sys 0m0.120s 00:10:01.483 07:06:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.483 07:06:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:01.483 ************************************ 00:10:01.483 END TEST thread_poller_perf 00:10:01.483 ************************************ 00:10:01.483 07:06:58 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:01.483 07:06:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:01.483 07:06:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.483 07:06:58 thread -- common/autotest_common.sh@10 -- # set +x 00:10:01.483 ************************************ 00:10:01.483 START TEST thread_poller_perf 00:10:01.483 
************************************ 00:10:01.483 07:06:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:01.483 [2024-11-20 07:06:58.541749] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:01.483 [2024-11-20 07:06:58.542146] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59609 ] 00:10:01.483 [2024-11-20 07:06:58.729544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.743 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:01.743 [2024-11-20 07:06:58.870919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.120 [2024-11-20T07:07:00.440Z] ====================================== 00:10:03.120 [2024-11-20T07:07:00.440Z] busy:2204639471 (cyc) 00:10:03.120 [2024-11-20T07:07:00.440Z] total_run_count: 3600000 00:10:03.120 [2024-11-20T07:07:00.440Z] tsc_hz: 2200000000 (cyc) 00:10:03.120 [2024-11-20T07:07:00.440Z] ====================================== 00:10:03.120 [2024-11-20T07:07:00.440Z] poller_cost: 612 (cyc), 278 (nsec) 00:10:03.120 00:10:03.120 real 0m1.604s 00:10:03.120 user 0m1.394s 00:10:03.120 sys 0m0.101s 00:10:03.120 07:07:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.120 07:07:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:03.120 ************************************ 00:10:03.120 END TEST thread_poller_perf 00:10:03.120 ************************************ 00:10:03.120 07:07:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:03.120 ************************************ 00:10:03.120 END TEST thread 00:10:03.120 ************************************ 00:10:03.120 
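Editor's note: the two poller_perf runs above report a derived poller_cost; it can be recomputed from the raw counters in the log (busy cycles divided by total_run_count, then converted to nanoseconds at the reported tsc_hz). A minimal shell sketch using the first run's values (all numbers copied from the log; the integer-truncation behavior is an assumption that happens to reproduce the reported figures):

```shell
# Recompute poller_cost from the first run's counters (values copied from
# the log above): cost in cycles = busy / total_run_count, then
# cost in nsec = cycles * 1e9 / tsc_hz, both with integer truncation.
busy=2208727172
total_run_count=298000
tsc_hz=2200000000
cost_cyc=$(( busy / total_run_count ))
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# -> poller_cost: 7411 (cyc), 3368 (nsec), matching the reported line
```

The same arithmetic on the second run's counters (2204639471 cyc over 3600000 runs) yields the 612-cycle / 278-nsec figure logged there.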
00:10:03.120 real 0m3.539s 00:10:03.120 user 0m2.956s 00:10:03.120 sys 0m0.361s 00:10:03.120 07:07:00 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.120 07:07:00 thread -- common/autotest_common.sh@10 -- # set +x 00:10:03.120 07:07:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:03.120 07:07:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:03.120 07:07:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.120 07:07:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.120 07:07:00 -- common/autotest_common.sh@10 -- # set +x 00:10:03.120 ************************************ 00:10:03.120 START TEST app_cmdline 00:10:03.120 ************************************ 00:10:03.120 07:07:00 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:03.120 * Looking for test storage... 00:10:03.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:03.120 07:07:00 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:03.120 07:07:00 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:10:03.120 07:07:00 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:03.120 07:07:00 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:03.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:03.120 07:07:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:03.120 07:07:00 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:03.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.121 --rc genhtml_branch_coverage=1 00:10:03.121 --rc genhtml_function_coverage=1 00:10:03.121 --rc genhtml_legend=1 00:10:03.121 --rc geninfo_all_blocks=1 00:10:03.121 --rc geninfo_unexecuted_blocks=1 00:10:03.121 00:10:03.121 ' 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:03.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.121 --rc genhtml_branch_coverage=1 00:10:03.121 --rc genhtml_function_coverage=1 00:10:03.121 --rc genhtml_legend=1 00:10:03.121 --rc geninfo_all_blocks=1 00:10:03.121 --rc geninfo_unexecuted_blocks=1 00:10:03.121 00:10:03.121 ' 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:03.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.121 --rc genhtml_branch_coverage=1 00:10:03.121 --rc genhtml_function_coverage=1 00:10:03.121 --rc genhtml_legend=1 00:10:03.121 --rc geninfo_all_blocks=1 00:10:03.121 --rc geninfo_unexecuted_blocks=1 00:10:03.121 00:10:03.121 ' 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:03.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:03.121 --rc genhtml_branch_coverage=1 00:10:03.121 --rc genhtml_function_coverage=1 00:10:03.121 --rc genhtml_legend=1 00:10:03.121 --rc geninfo_all_blocks=1 00:10:03.121 --rc 
geninfo_unexecuted_blocks=1 00:10:03.121 00:10:03.121 ' 00:10:03.121 07:07:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:03.121 07:07:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59698 00:10:03.121 07:07:00 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:03.121 07:07:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59698 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59698 ']' 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.121 07:07:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:03.380 [2024-11-20 07:07:00.511900] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
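Editor's note: the scripts/common.sh trace repeated before each suite above implements a dotted-version comparison (`lt 1.15 2`, splitting on `.` into `ver1`/`ver2` arrays and comparing component-wise) to decide which lcov options to export. A hedged re-derivation of that comparison in bash (function name and exact structure are illustrative, not the SPDK source):

```shell
# Component-wise dotted-version "less than", mirroring the ver1/ver2
# array comparison traced in scripts/common.sh above. Missing components
# are treated as 0 (so 1.2 vs 1.2.1 compares as 1.2.0 vs 1.2.1).
ver_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local v
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal is not "less than"
}
ver_lt 1.15 2 && echo "1.15 < 2"
```

With this, the traced check `lt 1.15 2` succeeds (1 < 2 on the first component), which is why every suite above exports the branch/function-coverage LCOV_OPTS.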
00:10:03.380 [2024-11-20 07:07:00.512409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59698 ] 00:10:03.638 [2024-11-20 07:07:00.708661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.638 [2024-11-20 07:07:00.873191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.575 07:07:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:04.575 07:07:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:04.575 07:07:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:05.142 { 00:10:05.142 "version": "SPDK v25.01-pre git sha1 097b7c969", 00:10:05.142 "fields": { 00:10:05.142 "major": 25, 00:10:05.142 "minor": 1, 00:10:05.142 "patch": 0, 00:10:05.142 "suffix": "-pre", 00:10:05.142 "commit": "097b7c969" 00:10:05.142 } 00:10:05.142 } 00:10:05.142 07:07:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:05.142 07:07:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:05.142 07:07:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:05.142 07:07:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:05.142 07:07:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.143 07:07:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:05.143 07:07:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.143 07:07:02 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:05.143 07:07:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:05.143 07:07:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:05.143 07:07:02 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:05.402 request: 00:10:05.402 { 00:10:05.402 "method": "env_dpdk_get_mem_stats", 00:10:05.402 "req_id": 1 00:10:05.402 } 00:10:05.402 Got JSON-RPC error response 00:10:05.402 response: 00:10:05.402 { 00:10:05.402 "code": -32601, 00:10:05.402 "message": "Method not found" 00:10:05.402 } 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 
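Editor's note: the env_dpdk_get_mem_stats call above fails by design — spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so the daemon answers any other method with JSON-RPC error -32601 ("Method not found"). A minimal sketch of checking for that error shape in shell (the response object is inlined verbatim from the log; a real test would capture it from rpc.py's output):

```shell
# The JSON-RPC error object returned above, inlined so the check is
# self-contained. grep on the error code avoids a jq dependency.
response='{"code": -32601, "message": "Method not found"}'
if echo "$response" | grep -q '"code": -32601'; then
    echo "disallowed method rejected as expected"
fi
```

This is the condition the NOT wrapper in the trace asserts: the call must fail (es=1) for the test to pass.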
00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:05.402 07:07:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59698 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59698 ']' 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59698 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59698 00:10:05.402 killing process with pid 59698 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59698' 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 59698 00:10:05.402 07:07:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 59698 00:10:07.935 00:10:07.935 real 0m4.774s 00:10:07.935 user 0m5.306s 00:10:07.935 sys 0m0.714s 00:10:07.935 07:07:04 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.935 07:07:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:07.935 ************************************ 00:10:07.935 END TEST app_cmdline 00:10:07.935 ************************************ 00:10:07.935 07:07:05 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:07.935 07:07:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.935 07:07:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.935 07:07:05 -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.935 ************************************ 00:10:07.935 START TEST version 00:10:07.935 ************************************ 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:07.935 * Looking for test storage... 00:10:07.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.935 07:07:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.935 07:07:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.935 07:07:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.935 07:07:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.935 07:07:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.935 07:07:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.935 07:07:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.935 07:07:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.935 07:07:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.935 07:07:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.935 07:07:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.935 07:07:05 version -- scripts/common.sh@344 -- # case "$op" in 00:10:07.935 07:07:05 version -- scripts/common.sh@345 -- # : 1 00:10:07.935 07:07:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.935 07:07:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.935 07:07:05 version -- scripts/common.sh@365 -- # decimal 1 00:10:07.935 07:07:05 version -- scripts/common.sh@353 -- # local d=1 00:10:07.935 07:07:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.935 07:07:05 version -- scripts/common.sh@355 -- # echo 1 00:10:07.935 07:07:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.935 07:07:05 version -- scripts/common.sh@366 -- # decimal 2 00:10:07.935 07:07:05 version -- scripts/common.sh@353 -- # local d=2 00:10:07.935 07:07:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.935 07:07:05 version -- scripts/common.sh@355 -- # echo 2 00:10:07.935 07:07:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.935 07:07:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.935 07:07:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.935 07:07:05 version -- scripts/common.sh@368 -- # return 0 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.935 07:07:05 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.935 --rc genhtml_branch_coverage=1 00:10:07.935 --rc genhtml_function_coverage=1 00:10:07.935 --rc genhtml_legend=1 00:10:07.935 --rc geninfo_all_blocks=1 00:10:07.935 --rc geninfo_unexecuted_blocks=1 00:10:07.936 00:10:07.936 ' 00:10:07.936 07:07:05 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.936 --rc genhtml_branch_coverage=1 00:10:07.936 --rc genhtml_function_coverage=1 00:10:07.936 --rc genhtml_legend=1 00:10:07.936 --rc geninfo_all_blocks=1 00:10:07.936 --rc geninfo_unexecuted_blocks=1 00:10:07.936 00:10:07.936 ' 00:10:07.936 07:07:05 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.936 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.936 --rc genhtml_branch_coverage=1 00:10:07.936 --rc genhtml_function_coverage=1 00:10:07.936 --rc genhtml_legend=1 00:10:07.936 --rc geninfo_all_blocks=1 00:10:07.936 --rc geninfo_unexecuted_blocks=1 00:10:07.936 00:10:07.936 ' 00:10:07.936 07:07:05 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.936 --rc genhtml_branch_coverage=1 00:10:07.936 --rc genhtml_function_coverage=1 00:10:07.936 --rc genhtml_legend=1 00:10:07.936 --rc geninfo_all_blocks=1 00:10:07.936 --rc geninfo_unexecuted_blocks=1 00:10:07.936 00:10:07.936 ' 00:10:07.936 07:07:05 version -- app/version.sh@17 -- # get_header_version major 00:10:07.936 07:07:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # cut -f2 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # tr -d '"' 00:10:07.936 07:07:05 version -- app/version.sh@17 -- # major=25 00:10:07.936 07:07:05 version -- app/version.sh@18 -- # get_header_version minor 00:10:07.936 07:07:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # cut -f2 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # tr -d '"' 00:10:07.936 07:07:05 version -- app/version.sh@18 -- # minor=1 00:10:07.936 07:07:05 version -- app/version.sh@19 -- # get_header_version patch 00:10:07.936 07:07:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # cut -f2 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # tr -d '"' 00:10:07.936 07:07:05 version -- app/version.sh@19 -- # patch=0 00:10:07.936 
07:07:05 version -- app/version.sh@20 -- # get_header_version suffix 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # cut -f2 00:10:07.936 07:07:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:07.936 07:07:05 version -- app/version.sh@14 -- # tr -d '"' 00:10:07.936 07:07:05 version -- app/version.sh@20 -- # suffix=-pre 00:10:07.936 07:07:05 version -- app/version.sh@22 -- # version=25.1 00:10:07.936 07:07:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:07.936 07:07:05 version -- app/version.sh@28 -- # version=25.1rc0 00:10:07.936 07:07:05 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:07.936 07:07:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:08.194 07:07:05 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:08.194 07:07:05 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:08.194 00:10:08.194 real 0m0.275s 00:10:08.194 user 0m0.171s 00:10:08.194 sys 0m0.141s 00:10:08.194 ************************************ 00:10:08.194 END TEST version 00:10:08.194 ************************************ 00:10:08.195 07:07:05 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.195 07:07:05 version -- common/autotest_common.sh@10 -- # set +x 00:10:08.195 07:07:05 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:08.195 07:07:05 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:10:08.195 07:07:05 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:08.195 07:07:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.195 07:07:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.195 07:07:05 -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.195 ************************************ 00:10:08.195 START TEST bdev_raid 00:10:08.195 ************************************ 00:10:08.195 07:07:05 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:10:08.195 * Looking for test storage... 00:10:08.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:08.195 07:07:05 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.195 07:07:05 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.195 07:07:05 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.453 07:07:05 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@345 -- # : 1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.453 07:07:05 bdev_raid -- scripts/common.sh@368 -- # return 0 00:10:08.453 07:07:05 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.453 07:07:05 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.453 --rc genhtml_branch_coverage=1 00:10:08.453 --rc genhtml_function_coverage=1 00:10:08.453 --rc genhtml_legend=1 00:10:08.453 --rc geninfo_all_blocks=1 00:10:08.453 --rc geninfo_unexecuted_blocks=1 00:10:08.454 00:10:08.454 ' 00:10:08.454 07:07:05 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.454 --rc genhtml_branch_coverage=1 00:10:08.454 --rc genhtml_function_coverage=1 00:10:08.454 --rc genhtml_legend=1 00:10:08.454 --rc geninfo_all_blocks=1 00:10:08.454 --rc geninfo_unexecuted_blocks=1 00:10:08.454 00:10:08.454 ' 00:10:08.454 07:07:05 bdev_raid -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:10:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.454 --rc genhtml_branch_coverage=1 00:10:08.454 --rc genhtml_function_coverage=1 00:10:08.454 --rc genhtml_legend=1 00:10:08.454 --rc geninfo_all_blocks=1 00:10:08.454 --rc geninfo_unexecuted_blocks=1 00:10:08.454 00:10:08.454 ' 00:10:08.454 07:07:05 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.454 --rc genhtml_branch_coverage=1 00:10:08.454 --rc genhtml_function_coverage=1 00:10:08.454 --rc genhtml_legend=1 00:10:08.454 --rc geninfo_all_blocks=1 00:10:08.454 --rc geninfo_unexecuted_blocks=1 00:10:08.454 00:10:08.454 ' 00:10:08.454 07:07:05 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:08.454 07:07:05 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:10:08.454 07:07:05 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:10:08.454 07:07:05 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:10:08.454 07:07:05 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:10:08.454 07:07:05 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:10:08.454 07:07:05 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:10:08.454 07:07:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.454 07:07:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.454 07:07:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.454 ************************************ 00:10:08.454 START TEST raid1_resize_data_offset_test 00:10:08.454 ************************************ 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # 
raid_pid=59886 00:10:08.454 Process raid pid: 59886 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59886' 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59886 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59886 ']' 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.454 07:07:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.454 [2024-11-20 07:07:05.672008] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:10:08.454 [2024-11-20 07:07:05.672932] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.713 [2024-11-20 07:07:05.868461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.971 [2024-11-20 07:07:06.033629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.971 [2024-11-20 07:07:06.249480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.971 [2024-11-20 07:07:06.249569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.569 malloc0 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.569 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.829 malloc1 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.829 07:07:06 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.829 null0 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.829 [2024-11-20 07:07:06.957312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:10:09.829 [2024-11-20 07:07:06.959854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:09.829 [2024-11-20 07:07:06.959943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:10:09.829 [2024-11-20 07:07:06.960164] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:09.829 [2024-11-20 07:07:06.960207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:10:09.829 [2024-11-20 07:07:06.960566] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:09.829 [2024-11-20 07:07:06.960782] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:09.829 [2024-11-20 07:07:06.960815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:09.829 [2024-11-20 07:07:06.961019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.829 07:07:06 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.829 [2024-11-20 07:07:07.017389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.829 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.397 malloc2 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.397 [2024-11-20 07:07:07.595570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:10.397 [2024-11-20 07:07:07.613852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.397 [2024-11-20 07:07:07.616413] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59886 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59886 ']' 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59886 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59886 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:10.397 killing process with pid 59886 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59886' 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59886 00:10:10.397 07:07:07 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59886 00:10:10.397 [2024-11-20 07:07:07.704962] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.397 [2024-11-20 07:07:07.705328] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:10:10.397 [2024-11-20 07:07:07.705399] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.397 [2024-11-20 07:07:07.705425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:10:10.655 [2024-11-20 07:07:07.737956] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.655 [2024-11-20 07:07:07.738395] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.655 [2024-11-20 07:07:07.738430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:12.558 [2024-11-20 07:07:09.366450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:13.199 07:07:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:10:13.199 00:10:13.199 real 0m4.875s 00:10:13.199 user 0m4.876s 00:10:13.199 sys 0m0.710s 00:10:13.199 07:07:10 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.199 07:07:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 ************************************ 00:10:13.199 END TEST raid1_resize_data_offset_test 00:10:13.199 ************************************ 00:10:13.199 07:07:10 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:10:13.199 07:07:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:13.199 07:07:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.199 07:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:13.199 ************************************ 00:10:13.199 START TEST raid0_resize_superblock_test 00:10:13.199 ************************************ 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59975 00:10:13.199 Process raid pid: 59975 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59975' 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59975 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59975 ']' 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.199 07:07:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.459 [2024-11-20 07:07:10.603435] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:13.459 [2024-11-20 07:07:10.603627] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:13.718 [2024-11-20 07:07:10.793923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.718 [2024-11-20 07:07:10.926343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.978 [2024-11-20 07:07:11.140021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:13.978 [2024-11-20 07:07:11.140076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.545 07:07:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.545 07:07:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:14.545 07:07:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:14.545 07:07:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.545 07:07:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:14.810 malloc0 00:10:14.810 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.810 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:14.810 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.810 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.810 [2024-11-20 07:07:12.123564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:14.810 [2024-11-20 07:07:12.123643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.810 [2024-11-20 07:07:12.123673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:14.810 [2024-11-20 07:07:12.123693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.069 [2024-11-20 07:07:12.126481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.069 [2024-11-20 07:07:12.126531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:15.069 pt0 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.069 0a403b4f-71d5-4bc4-b6d8-1a929e1b495f 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.069 6036d495-11d1-4c38-b3d8-ab57eabae9c0 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.069 fe268d0e-3b9d-4acb-967c-ae19571df56f 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.069 [2024-11-20 07:07:12.279289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6036d495-11d1-4c38-b3d8-ab57eabae9c0 is claimed 00:10:15.069 [2024-11-20 07:07:12.279439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fe268d0e-3b9d-4acb-967c-ae19571df56f is claimed 00:10:15.069 [2024-11-20 07:07:12.279633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:15.069 [2024-11-20 07:07:12.279669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:10:15.069 [2024-11-20 07:07:12.280042] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:15.069 [2024-11-20 07:07:12.280298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:15.069 [2024-11-20 07:07:12.280324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:15.069 [2024-11-20 07:07:12.280563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:15.069 07:07:12 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.069 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-20 07:07:12.391617] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-20 07:07:12.435651] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:15.329 [2024-11-20 07:07:12.435694] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '6036d495-11d1-4c38-b3d8-ab57eabae9c0' was resized: old size 131072, new size 204800 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-20 07:07:12.443414] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:15.329 [2024-11-20 07:07:12.443460] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fe268d0e-3b9d-4acb-967c-ae19571df56f' was resized: old size 131072, new size 204800 00:10:15.329 [2024-11-20 07:07:12.443504] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:10:15.329 [2024-11-20 07:07:12.555645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-20 07:07:12.603332] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:10:15.329 [2024-11-20 07:07:12.603431] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:10:15.329 [2024-11-20 07:07:12.603451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.329 [2024-11-20 07:07:12.603476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:15.329 [2024-11-20 07:07:12.603656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.329 [2024-11-20 07:07:12.603718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.329 [2024-11-20 07:07:12.603739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-20 07:07:12.611261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:15.329 [2024-11-20 07:07:12.611327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.329 [2024-11-20 07:07:12.611356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:15.329 [2024-11-20 07:07:12.611373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.329 [2024-11-20 07:07:12.614267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.329 [2024-11-20 07:07:12.614330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:10:15.329 pt0 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:15.329 [2024-11-20 07:07:12.616616] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 6036d495-11d1-4c38-b3d8-ab57eabae9c0 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 [2024-11-20 07:07:12.616706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 6036d495-11d1-4c38-b3d8-ab57eabae9c0 is claimed 00:10:15.329 [2024-11-20 07:07:12.616845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fe268d0e-3b9d-4acb-967c-ae19571df56f 00:10:15.329 [2024-11-20 07:07:12.616896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fe268d0e-3b9d-4acb-967c-ae19571df56f is claimed 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 [2024-11-20 07:07:12.617051] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev fe268d0e-3b9d-4acb-967c-ae19571df56f (2) smaller than existing raid bdev Raid (3) 00:10:15.329 [2024-11-20 07:07:12.617085] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 6036d495-11d1-4c38-b3d8-ab57eabae9c0: File exists 00:10:15.329 [2024-11-20 07:07:12.617140] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:15.329 [2024-11-20 07:07:12.617163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:10:15.329 [2024-11-20 07:07:12.617466] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:15.329 [2024-11-20 07:07:12.617666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:15.329 [2024-11-20 
07:07:12.617689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:15.329 [2024-11-20 07:07:12.617889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.329 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:10:15.329 [2024-11-20 07:07:12.631565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59975 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59975 ']' 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59975 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59975 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.588 killing process with pid 59975 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59975' 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59975 00:10:15.588 [2024-11-20 07:07:12.708855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.588 07:07:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59975 00:10:15.588 [2024-11-20 07:07:12.708964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.588 [2024-11-20 07:07:12.709032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.588 [2024-11-20 07:07:12.709048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:16.998 [2024-11-20 07:07:14.059747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.940 07:07:15 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:17.940 00:10:17.940 real 0m4.672s 00:10:17.940 user 0m4.922s 00:10:17.940 sys 0m0.670s 00:10:17.940 07:07:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.940 07:07:15 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.940 
************************************ 00:10:17.940 END TEST raid0_resize_superblock_test 00:10:17.940 ************************************ 00:10:17.940 07:07:15 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:10:17.940 07:07:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.940 07:07:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.940 07:07:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.940 ************************************ 00:10:17.940 START TEST raid1_resize_superblock_test 00:10:17.940 ************************************ 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60079 00:10:17.940 Process raid pid: 60079 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60079' 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60079 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60079 ']' 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.940 07:07:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.199 [2024-11-20 07:07:15.329311] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:18.199 [2024-11-20 07:07:15.329493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.199 [2024-11-20 07:07:15.510120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.458 [2024-11-20 07:07:15.647607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.716 [2024-11-20 07:07:15.871313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.716 [2024-11-20 07:07:15.871367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.975 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.975 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:18.975 07:07:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:10:18.975 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.975 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.541 malloc0 00:10:19.541 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.541 07:07:16 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 [2024-11-20 07:07:16.865850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:19.799 [2024-11-20 07:07:16.865941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.799 [2024-11-20 07:07:16.865974] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:19.799 [2024-11-20 07:07:16.865996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.799 [2024-11-20 07:07:16.868903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.799 [2024-11-20 07:07:16.868984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:19.799 pt0 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 d190e41a-0f8a-4985-bf87-e4b81242e472 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:10:19.799 07:07:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:16 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 3d79d2d0-c868-4baf-adf7-5627b96429ec 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 162a2b2f-29b5-41c9-b110-db784bdeabb0 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 [2024-11-20 07:07:17.021587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3d79d2d0-c868-4baf-adf7-5627b96429ec is claimed 00:10:19.799 [2024-11-20 07:07:17.021721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 162a2b2f-29b5-41c9-b110-db784bdeabb0 is claimed 00:10:19.799 [2024-11-20 07:07:17.021929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:19.799 [2024-11-20 07:07:17.021956] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:10:19.799 [2024-11-20 07:07:17.022307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:19.799 [2024-11-20 07:07:17.022573] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:19.799 [2024-11-20 07:07:17.022600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:19.799 [2024-11-20 07:07:17.022793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:10:19.799 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-11-20 07:07:17.133999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-11-20 07:07:17.173956] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:20.058 [2024-11-20 07:07:17.173996] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3d79d2d0-c868-4baf-adf7-5627b96429ec' was resized: old size 131072, new size 204800 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:10:20.058 07:07:17 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-11-20 07:07:17.181801] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:20.058 [2024-11-20 07:07:17.181837] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '162a2b2f-29b5-41c9-b110-db784bdeabb0' was resized: old size 131072, new size 204800 00:10:20.058 [2024-11-20 07:07:17.181893] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:10:20.058 [2024-11-20 07:07:17.309962] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-11-20 07:07:17.361672] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:10:20.058 [2024-11-20 07:07:17.361798] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:10:20.058 [2024-11-20 07:07:17.361851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:10:20.058 [2024-11-20 07:07:17.362071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.058 [2024-11-20 07:07:17.362332] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.058 [2024-11-20 07:07:17.362439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.058 [2024-11-20 07:07:17.362462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-11-20 07:07:17.369571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:10:20.058 [2024-11-20 07:07:17.369652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.058 [2024-11-20 07:07:17.369680] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:10:20.058 [2024-11-20 07:07:17.369699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.058 [2024-11-20 07:07:17.372663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.058 [2024-11-20 07:07:17.372729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:10:20.058 pt0 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.058 
07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.058 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.058 [2024-11-20 07:07:17.375141] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3d79d2d0-c868-4baf-adf7-5627b96429ec 00:10:20.317 [2024-11-20 07:07:17.375221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3d79d2d0-c868-4baf-adf7-5627b96429ec is claimed 00:10:20.317 [2024-11-20 07:07:17.375391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 162a2b2f-29b5-41c9-b110-db784bdeabb0 00:10:20.317 [2024-11-20 07:07:17.375426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 162a2b2f-29b5-41c9-b110-db784bdeabb0 is claimed 00:10:20.317 [2024-11-20 07:07:17.375575] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 162a2b2f-29b5-41c9-b110-db784bdeabb0 (2) smaller than existing raid bdev Raid (3) 00:10:20.317 [2024-11-20 07:07:17.375605] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 3d79d2d0-c868-4baf-adf7-5627b96429ec: File exists 00:10:20.317 [2024-11-20 07:07:17.375664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:20.317 [2024-11-20 07:07:17.375683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:20.317 [2024-11-20 07:07:17.376022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:20.317 [2024-11-20 07:07:17.376234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:20.317 [2024-11-20 07:07:17.376258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:10:20.317 
[2024-11-20 07:07:17.376442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:10:20.317 [2024-11-20 07:07:17.389975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60079 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60079 ']' 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60079 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60079 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:20.317 killing process with pid 60079 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60079' 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60079 00:10:20.317 [2024-11-20 07:07:17.472123] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:20.317 07:07:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60079 00:10:20.317 [2024-11-20 07:07:17.472238] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.317 [2024-11-20 07:07:17.472310] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.317 [2024-11-20 07:07:17.472325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:10:21.758 [2024-11-20 07:07:18.869848] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:22.692 07:07:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:10:22.692 00:10:22.692 real 0m4.722s 00:10:22.692 user 0m4.980s 00:10:22.692 sys 0m0.663s 00:10:22.692 07:07:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.692 07:07:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.692 ************************************ 00:10:22.692 END TEST raid1_resize_superblock_test 00:10:22.692 ************************************ 00:10:22.692 
07:07:19 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:10:22.692 07:07:19 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:10:22.692 07:07:19 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:10:22.692 07:07:19 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:10:22.692 07:07:19 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:10:22.692 07:07:19 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:10:22.692 07:07:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:22.692 07:07:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.692 07:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.951 ************************************ 00:10:22.951 START TEST raid_function_test_raid0 00:10:22.951 ************************************ 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60176 00:10:22.951 Process raid pid: 60176 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60176' 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60176 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60176 ']' 00:10:22.951 07:07:20 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.951 07:07:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:22.951 [2024-11-20 07:07:20.102311] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:22.952 [2024-11-20 07:07:20.102526] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.210 [2024-11-20 07:07:20.282374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.210 [2024-11-20 07:07:20.415585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.468 [2024-11-20 07:07:20.628474] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.468 [2024-11-20 07:07:20.628547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:24.035 Base_1 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:24.035 Base_2 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.035 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:24.036 [2024-11-20 07:07:21.230977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:24.036 [2024-11-20 07:07:21.233622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:24.036 [2024-11-20 07:07:21.233727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:24.036 [2024-11-20 07:07:21.233746] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:24.036 [2024-11-20 07:07:21.234100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:24.036 [2024-11-20 07:07:21.234305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:24.036 [2024-11-20 07:07:21.234330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:10:24.036 [2024-11-20 07:07:21.234509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:24.036 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:24.036 
07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:10:24.294 [2024-11-20 07:07:21.587123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:24.294 /dev/nbd0 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.552 1+0 records in 00:10:24.552 1+0 records out 00:10:24.552 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242006 s, 16.9 MB/s 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:10:24.552 
07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:24.552 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:24.810 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:24.810 { 00:10:24.810 "nbd_device": "/dev/nbd0", 00:10:24.810 "bdev_name": "raid" 00:10:24.810 } 00:10:24.810 ]' 00:10:24.810 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:24.810 { 00:10:24.810 "nbd_device": "/dev/nbd0", 00:10:24.810 "bdev_name": "raid" 00:10:24.810 } 00:10:24.810 ]' 00:10:24.810 07:07:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:24.810 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:24.811 07:07:22 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:24.811 4096+0 records in 00:10:24.811 4096+0 records out 00:10:24.811 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0292688 s, 71.7 MB/s 00:10:24.811 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:25.378 4096+0 records in 00:10:25.378 4096+0 records out 00:10:25.378 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.341056 s, 6.1 MB/s 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:25.378 128+0 records in 00:10:25.378 128+0 records out 00:10:25.378 65536 bytes (66 kB, 64 KiB) copied, 0.00112793 s, 58.1 MB/s 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:25.378 
07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:25.378 2035+0 records in 00:10:25.378 2035+0 records out 00:10:25.378 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0127327 s, 81.8 MB/s 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:25.378 456+0 records in 00:10:25.378 456+0 records out 00:10:25.378 233472 bytes (233 kB, 228 KiB) copied, 0.00268205 s, 87.0 MB/s 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.378 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:25.637 [2024-11-20 07:07:22.855129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:25.637 07:07:22 
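The `raid_unmap_data_verify` loop above seeds `/raidtest/raidrandtest` with 2 MiB of random data, copies it to the device, then for each (block offset, block count) pair zeroes that region in the reference file with `dd conv=notrunc`, discards the same byte range on the device with `blkdiscard`, flushes, and runs a full `cmp`. The byte offsets are simply blocks × 512 (e.g. 1028 × 512 = 526336). A hedged sketch of the same flow, using a second plain file in place of `/dev/nbd0` (where a real `blkdiscard` would go) so it runs anywhere:

```shell
# Sketch of the unmap/verify pattern from bdev_raid.sh. A regular file
# stands in for /dev/nbd0; zeroing it with dd models what blkdiscard
# does on a raid bdev that returns zeroes for unmapped blocks.
blksize=512
ref=$(mktemp)
dev=$(mktemp)

dd if=/dev/urandom of="$ref" bs=$blksize count=4096 2>/dev/null  # 2 MiB
cp "$ref" "$dev"                      # stands in for dd ... of=/dev/nbd0

for pair in "0 128" "1028 2035" "321 456"; do
    set -- $pair
    off=$(( $1 * blksize ))           # byte offset, e.g. 1028*512=526336
    len=$(( $2 * blksize ))           # byte length, e.g. 2035*512=1041920
    # zero the region in the reference file; conv=notrunc keeps its size
    dd if=/dev/zero of="$ref" bs=$blksize seek=$1 count=$2 conv=notrunc 2>/dev/null
    # on a real device this step is: blkdiscard -o $off -l $len /dev/nbd0
    dd if=/dev/zero of="$dev" bs=$blksize seek=$1 count=$2 conv=notrunc 2>/dev/null
done
cmp -s -n 2097152 "$ref" "$dev" && match=yes || match=no
rm -f "$ref" "$dev"
echo "$match"
```

Note `conv=notrunc` on the reference-file writes: without it, `dd` would truncate the 2 MiB file at the end of the zeroed region and the final `cmp` would compare short.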
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:25.637 07:07:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60176 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60176 ']' 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60176 
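Above, `nbd_get_count` pipes the `nbd_get_disks` RPC reply through `jq -r '.[] | .nbd_device'` and counts matches with `grep -c` (falling back to `true` so an empty list yields count 0 instead of a failing pipeline). A small sketch of the same counting logic that uses `grep -o | wc -l` instead of `jq` so it runs without that dependency; the JSON literal mimics the one-disk reply seen in the log:

```shell
# Sketch of the nbd_get_count logic: count /dev/nbd* entries in the
# nbd_get_disks JSON. grep -o stands in for the jq extraction so the
# example has no jq dependency.
json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "raid" } ]'
count=$(printf '%s\n' "$json" | grep -o '/dev/nbd[0-9]*' | wc -l)
echo "$count"    # 1 while the disk is attached; '[]' would give 0
```

After `nbd_stop_disk`, the RPC returns `[]`, the count drops to 0, and the `'[' 0 -ne 0 ']'` check above passes.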
00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:10:25.895 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.896 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60176 00:10:26.154 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.154 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.154 killing process with pid 60176 00:10:26.154 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60176' 00:10:26.154 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60176 00:10:26.154 [2024-11-20 07:07:23.221730] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.154 [2024-11-20 07:07:23.221847] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.154 [2024-11-20 07:07:23.221925] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.154 [2024-11-20 07:07:23.221949] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:26.154 07:07:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60176 00:10:26.154 [2024-11-20 07:07:23.413974] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.529 07:07:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:10:27.529 00:10:27.529 real 0m4.464s 00:10:27.529 user 0m5.567s 00:10:27.529 sys 0m1.018s 00:10:27.529 07:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.529 ************************************ 00:10:27.529 END TEST raid_function_test_raid0 
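The `killprocess` sequence above first probes the PID with `kill -0`, then checks the command name via `ps -o comm=` (refusing to signal if the name is `sudo`) before actually killing and `wait`-ing. A sketch of that guard, with a background `sleep` standing in for the SPDK app (pid 60176 in the log):

```shell
# Sketch of the killprocess guard: confirm the PID is alive (kill -0)
# and check its command name before signalling, so a recycled PID
# belonging to some other process is never killed by mistake.
sleep 30 &
pid=$!
if kill -0 "$pid" 2>/dev/null; then
    name=$(ps -o comm= -p "$pid")
    if [ "$name" != "sudo" ]; then    # the helper refuses to kill sudo
        kill "$pid"
    fi
fi
wait "$pid" 2>/dev/null || true       # reap; exit status reflects the signal
kill -0 "$pid" 2>/dev/null && alive=yes || alive=no
echo "$alive"
```

The final `wait` matters: without it the killed process would linger as a zombie and a later `kill -0` on its PID would still succeed.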
00:10:27.529 ************************************ 00:10:27.529 07:07:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 07:07:24 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:10:27.529 07:07:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.529 07:07:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.529 07:07:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.529 ************************************ 00:10:27.529 START TEST raid_function_test_concat 00:10:27.529 ************************************ 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60311 00:10:27.529 Process raid pid: 60311 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60311' 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60311 00:10:27.529 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60311 ']' 00:10:27.530 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.530 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:10:27.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.530 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.530 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.530 07:07:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:27.530 [2024-11-20 07:07:24.637852] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:27.530 [2024-11-20 07:07:24.638080] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.530 [2024-11-20 07:07:24.817077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.789 [2024-11-20 07:07:24.943504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.048 [2024-11-20 07:07:25.153020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.048 [2024-11-20 07:07:25.153113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.306 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.306 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:10:28.306 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:10:28.306 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.306 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:28.565 Base_1 00:10:28.565 07:07:25 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:28.565 Base_2 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:28.565 [2024-11-20 07:07:25.676222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:28.565 [2024-11-20 07:07:25.678661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:28.565 [2024-11-20 07:07:25.678770] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:28.565 [2024-11-20 07:07:25.678790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:28.565 [2024-11-20 07:07:25.679167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:28.565 [2024-11-20 07:07:25.679384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:28.565 [2024-11-20 07:07:25.679412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:10:28.565 [2024-11-20 07:07:25.679604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.565 07:07:25 
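The `blockcnt 131072, blocklen 512` debug line above is worth checking against the setup: the concat raid is built from two 32 MiB malloc bdevs (`bdev_malloc_create 32 512`), and concat simply sums the base bdevs' capacity. A quick arithmetic sketch:

```shell
# Check of the "blockcnt 131072, blocklen 512" line: two 32 MiB base
# bdevs concatenated at a 512-byte block size.
blocklen=512
base_mb=32
num_bases=2
blockcnt=$(( num_bases * base_mb * 1024 * 1024 / blocklen ))
echo "$blockcnt"    # 131072
```

So each 32 MiB base contributes 65536 blocks, and the concat bdev exposes 131072, matching the log.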
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:28.565 07:07:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk raid /dev/nbd0 00:10:28.824 [2024-11-20 07:07:26.000412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.824 /dev/nbd0 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.824 1+0 records in 00:10:28.824 1+0 records out 00:10:28.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242299 s, 16.9 MB/s 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:28.824 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:29.083 { 00:10:29.083 "nbd_device": "/dev/nbd0", 00:10:29.083 "bdev_name": "raid" 00:10:29.083 } 00:10:29.083 ]' 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:29.083 { 00:10:29.083 "nbd_device": "/dev/nbd0", 00:10:29.083 "bdev_name": "raid" 00:10:29.083 } 00:10:29.083 ]' 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:10:29.083 07:07:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:10:29.083 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd 
if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:10:29.084 4096+0 records in 00:10:29.084 4096+0 records out 00:10:29.084 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0254751 s, 82.3 MB/s 00:10:29.084 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:10:29.650 4096+0 records in 00:10:29.650 4096+0 records out 00:10:29.650 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.323709 s, 6.5 MB/s 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:10:29.650 128+0 records in 00:10:29.650 128+0 records out 00:10:29.650 65536 bytes (66 kB, 64 KiB) copied, 0.0011431 s, 57.3 MB/s 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:10:29.650 2035+0 records in 00:10:29.650 2035+0 records out 00:10:29.650 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0112817 s, 92.4 MB/s 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:10:29.650 456+0 records in 00:10:29.650 456+0 records out 00:10:29.650 233472 bytes (233 kB, 228 KiB) copied, 0.00342071 s, 68.3 MB/s 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:29.650 07:07:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:29.910 [2024-11-20 07:07:27.133910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:29.910 07:07:27 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:10:29.910 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60311 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60311 ']' 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- 
# kill -0 60311 00:10:30.169 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60311 00:10:30.428 killing process with pid 60311 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60311' 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60311 00:10:30.428 [2024-11-20 07:07:27.519842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:30.428 07:07:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60311 00:10:30.428 [2024-11-20 07:07:27.520018] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:30.428 [2024-11-20 07:07:27.520089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:30.428 [2024-11-20 07:07:27.520124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:10:30.428 [2024-11-20 07:07:27.718299] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:31.809 ************************************ 00:10:31.809 END TEST raid_function_test_concat 00:10:31.809 ************************************ 00:10:31.809 07:07:28 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:10:31.809 00:10:31.809 real 0m4.264s 00:10:31.809 user 0m5.206s 00:10:31.809 sys 0m0.981s 00:10:31.809 07:07:28 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.809 07:07:28 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 07:07:28 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:10:31.810 07:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.810 07:07:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.810 07:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 ************************************ 00:10:31.810 START TEST raid0_resize_test 00:10:31.810 ************************************ 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:31.810 Process raid pid: 60440 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60440 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60440' 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 
00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60440 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60440 ']' 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.810 07:07:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.810 [2024-11-20 07:07:28.965274] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:10:31.810 [2024-11-20 07:07:28.966599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.073 [2024-11-20 07:07:29.163308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.073 [2024-11-20 07:07:29.303538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.338 [2024-11-20 07:07:29.520675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.338 [2024-11-20 07:07:29.520736] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 Base_1 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 Base_2 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd 
bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 [2024-11-20 07:07:29.992387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:32.912 [2024-11-20 07:07:29.995006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:32.912 [2024-11-20 07:07:29.995078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:32.912 [2024-11-20 07:07:29.995094] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:32.912 [2024-11-20 07:07:29.995478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:32.912 [2024-11-20 07:07:29.995677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:32.912 [2024-11-20 07:07:29.995691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:32.912 [2024-11-20 07:07:29.995873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 [2024-11-20 07:07:30.000395] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:32.912 [2024-11-20 07:07:30.000427] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:32.912 true 
00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 [2024-11-20 07:07:30.012671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 [2024-11-20 07:07:30.072464] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:32.912 [2024-11-20 07:07:30.072701] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:32.912 [2024-11-20 07:07:30.072773] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:10:32.912 true 
00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.912 [2024-11-20 07:07:30.084689] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60440 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60440 ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60440 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60440 00:10:32.912 killing process with pid 60440 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60440' 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60440 00:10:32.912 [2024-11-20 07:07:30.165724] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.912 07:07:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60440 00:10:32.912 [2024-11-20 07:07:30.165852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.912 [2024-11-20 07:07:30.165934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.912 [2024-11-20 07:07:30.165948] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:32.912 [2024-11-20 07:07:30.182211] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.289 07:07:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:34.289 00:10:34.289 real 0m2.426s 00:10:34.289 user 0m2.689s 00:10:34.289 sys 0m0.403s 00:10:34.289 ************************************ 00:10:34.289 END TEST raid0_resize_test 00:10:34.289 ************************************ 00:10:34.289 07:07:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.289 07:07:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.289 07:07:31 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:10:34.289 07:07:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.289 07:07:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.289 07:07:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.289 ************************************ 
00:10:34.289 START TEST raid1_resize_test 00:10:34.289 ************************************ 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:10:34.289 Process raid pid: 60501 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60501 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60501' 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60501 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60501 ']' 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:34.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.289 07:07:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.289 [2024-11-20 07:07:31.447730] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:34.289 [2024-11-20 07:07:31.448007] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.548 [2024-11-20 07:07:31.639673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:34.548 [2024-11-20 07:07:31.780367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.807 [2024-11-20 07:07:31.990542] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.807 [2024-11-20 07:07:31.990621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.375 Base_1 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.375 Base_2 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.375 [2024-11-20 07:07:32.510831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:10:35.375 [2024-11-20 07:07:32.513296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:10:35.375 [2024-11-20 07:07:32.513522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:35.375 [2024-11-20 07:07:32.513553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:35.375 [2024-11-20 07:07:32.513914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:10:35.375 [2024-11-20 07:07:32.514109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:35.375 [2024-11-20 07:07:32.514124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:10:35.375 [2024-11-20 07:07:32.514370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.375 [2024-11-20 07:07:32.518836] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:35.375 [2024-11-20 07:07:32.518874] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:10:35.375 true 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:35.375 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:10:35.376 [2024-11-20 07:07:32.531045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.376 [2024-11-20 
07:07:32.582886] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:10:35.376 [2024-11-20 07:07:32.582925] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:10:35.376 [2024-11-20 07:07:32.582965] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:10:35.376 true 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.376 [2024-11-20 07:07:32.595094] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60501 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60501 ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60501 00:10:35.376 07:07:32 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60501 00:10:35.376 killing process with pid 60501 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60501' 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60501 00:10:35.376 [2024-11-20 07:07:32.676777] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:35.376 07:07:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60501 00:10:35.376 [2024-11-20 07:07:32.676897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.376 [2024-11-20 07:07:32.677507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.376 [2024-11-20 07:07:32.677538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:10:35.635 [2024-11-20 07:07:32.693013] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:36.570 07:07:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:10:36.570 00:10:36.570 real 0m2.435s 00:10:36.570 user 0m2.720s 00:10:36.570 sys 0m0.415s 00:10:36.570 07:07:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.570 07:07:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.570 ************************************ 00:10:36.570 END TEST raid1_resize_test 00:10:36.570 
************************************ 00:10:36.570 07:07:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:36.570 07:07:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:36.570 07:07:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:10:36.570 07:07:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:36.570 07:07:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.570 07:07:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:36.570 ************************************ 00:10:36.570 START TEST raid_state_function_test 00:10:36.570 ************************************ 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:36.570 Process raid pid: 60564 00:10:36.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60564 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60564' 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60564 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60564 ']' 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:36.570 07:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.829 [2024-11-20 07:07:33.940156] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:10:36.829 [2024-11-20 07:07:33.940569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:36.829 [2024-11-20 07:07:34.125789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.100 [2024-11-20 07:07:34.268826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.386 [2024-11-20 07:07:34.485434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.387 [2024-11-20 07:07:34.485626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.645 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:37.645 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:37.645 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:37.645 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.645 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.645 [2024-11-20 07:07:34.912510] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.645 [2024-11-20 07:07:34.912575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.645 [2024-11-20 07:07:34.912607] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.645 [2024-11-20 07:07:34.912623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.646 07:07:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.646 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.905 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.905 "name": "Existed_Raid", 00:10:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.905 "strip_size_kb": 64, 00:10:37.905 "state": "configuring", 00:10:37.905 
"raid_level": "raid0", 00:10:37.905 "superblock": false, 00:10:37.905 "num_base_bdevs": 2, 00:10:37.905 "num_base_bdevs_discovered": 0, 00:10:37.905 "num_base_bdevs_operational": 2, 00:10:37.905 "base_bdevs_list": [ 00:10:37.905 { 00:10:37.905 "name": "BaseBdev1", 00:10:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.905 "is_configured": false, 00:10:37.905 "data_offset": 0, 00:10:37.905 "data_size": 0 00:10:37.905 }, 00:10:37.905 { 00:10:37.905 "name": "BaseBdev2", 00:10:37.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.905 "is_configured": false, 00:10:37.905 "data_offset": 0, 00:10:37.905 "data_size": 0 00:10:37.905 } 00:10:37.905 ] 00:10:37.905 }' 00:10:37.905 07:07:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.905 07:07:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.163 [2024-11-20 07:07:35.436648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.163 [2024-11-20 07:07:35.436859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:38.163 [2024-11-20 07:07:35.444588] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.163 [2024-11-20 07:07:35.444676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.163 [2024-11-20 07:07:35.444700] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.163 [2024-11-20 07:07:35.444717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.163 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.422 [2024-11-20 07:07:35.491012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.422 BaseBdev1 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.422 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.422 [ 00:10:38.422 { 00:10:38.422 "name": "BaseBdev1", 00:10:38.422 "aliases": [ 00:10:38.422 "077025a3-a184-42ed-9240-057e714797ca" 00:10:38.422 ], 00:10:38.422 "product_name": "Malloc disk", 00:10:38.422 "block_size": 512, 00:10:38.422 "num_blocks": 65536, 00:10:38.422 "uuid": "077025a3-a184-42ed-9240-057e714797ca", 00:10:38.422 "assigned_rate_limits": { 00:10:38.422 "rw_ios_per_sec": 0, 00:10:38.422 "rw_mbytes_per_sec": 0, 00:10:38.422 "r_mbytes_per_sec": 0, 00:10:38.422 "w_mbytes_per_sec": 0 00:10:38.422 }, 00:10:38.422 "claimed": true, 00:10:38.423 "claim_type": "exclusive_write", 00:10:38.423 "zoned": false, 00:10:38.423 "supported_io_types": { 00:10:38.423 "read": true, 00:10:38.423 "write": true, 00:10:38.423 "unmap": true, 00:10:38.423 "flush": true, 00:10:38.423 "reset": true, 00:10:38.423 "nvme_admin": false, 00:10:38.423 "nvme_io": false, 00:10:38.423 "nvme_io_md": false, 00:10:38.423 "write_zeroes": true, 00:10:38.423 "zcopy": true, 00:10:38.423 "get_zone_info": false, 00:10:38.423 "zone_management": false, 00:10:38.423 "zone_append": false, 00:10:38.423 "compare": false, 00:10:38.423 "compare_and_write": false, 00:10:38.423 "abort": true, 00:10:38.423 "seek_hole": false, 00:10:38.423 "seek_data": false, 00:10:38.423 "copy": true, 00:10:38.423 "nvme_iov_md": 
false 00:10:38.423 }, 00:10:38.423 "memory_domains": [ 00:10:38.423 { 00:10:38.423 "dma_device_id": "system", 00:10:38.423 "dma_device_type": 1 00:10:38.423 }, 00:10:38.423 { 00:10:38.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.423 "dma_device_type": 2 00:10:38.423 } 00:10:38.423 ], 00:10:38.423 "driver_specific": {} 00:10:38.423 } 00:10:38.423 ] 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.423 
07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.423 "name": "Existed_Raid", 00:10:38.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.423 "strip_size_kb": 64, 00:10:38.423 "state": "configuring", 00:10:38.423 "raid_level": "raid0", 00:10:38.423 "superblock": false, 00:10:38.423 "num_base_bdevs": 2, 00:10:38.423 "num_base_bdevs_discovered": 1, 00:10:38.423 "num_base_bdevs_operational": 2, 00:10:38.423 "base_bdevs_list": [ 00:10:38.423 { 00:10:38.423 "name": "BaseBdev1", 00:10:38.423 "uuid": "077025a3-a184-42ed-9240-057e714797ca", 00:10:38.423 "is_configured": true, 00:10:38.423 "data_offset": 0, 00:10:38.423 "data_size": 65536 00:10:38.423 }, 00:10:38.423 { 00:10:38.423 "name": "BaseBdev2", 00:10:38.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.423 "is_configured": false, 00:10:38.423 "data_offset": 0, 00:10:38.423 "data_size": 0 00:10:38.423 } 00:10:38.423 ] 00:10:38.423 }' 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.423 07:07:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.990 [2024-11-20 07:07:36.051273] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.990 [2024-11-20 07:07:36.051335] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.990 [2024-11-20 07:07:36.059303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.990 [2024-11-20 07:07:36.061838] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.990 [2024-11-20 07:07:36.061906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.990 "name": "Existed_Raid", 00:10:38.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.990 "strip_size_kb": 64, 00:10:38.990 "state": "configuring", 00:10:38.990 "raid_level": "raid0", 00:10:38.990 "superblock": false, 00:10:38.990 "num_base_bdevs": 2, 00:10:38.990 "num_base_bdevs_discovered": 1, 00:10:38.990 "num_base_bdevs_operational": 2, 00:10:38.990 "base_bdevs_list": [ 00:10:38.990 { 00:10:38.990 "name": "BaseBdev1", 00:10:38.990 "uuid": "077025a3-a184-42ed-9240-057e714797ca", 00:10:38.990 "is_configured": true, 00:10:38.990 "data_offset": 0, 00:10:38.990 "data_size": 65536 00:10:38.990 }, 00:10:38.990 { 00:10:38.990 "name": "BaseBdev2", 00:10:38.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.990 "is_configured": false, 00:10:38.990 "data_offset": 0, 00:10:38.990 "data_size": 0 00:10:38.990 } 00:10:38.990 
] 00:10:38.990 }' 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.990 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.557 [2024-11-20 07:07:36.612445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.557 [2024-11-20 07:07:36.612794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:39.557 [2024-11-20 07:07:36.612819] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:10:39.557 [2024-11-20 07:07:36.613262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:39.557 [2024-11-20 07:07:36.613506] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:39.557 [2024-11-20 07:07:36.613528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:39.557 [2024-11-20 07:07:36.613840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.557 BaseBdev2 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.557 07:07:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.557 [ 00:10:39.557 { 00:10:39.557 "name": "BaseBdev2", 00:10:39.557 "aliases": [ 00:10:39.557 "68f5269f-ea6b-4892-bf1e-4c30200bbbb8" 00:10:39.557 ], 00:10:39.557 "product_name": "Malloc disk", 00:10:39.557 "block_size": 512, 00:10:39.557 "num_blocks": 65536, 00:10:39.557 "uuid": "68f5269f-ea6b-4892-bf1e-4c30200bbbb8", 00:10:39.557 "assigned_rate_limits": { 00:10:39.557 "rw_ios_per_sec": 0, 00:10:39.557 "rw_mbytes_per_sec": 0, 00:10:39.557 "r_mbytes_per_sec": 0, 00:10:39.557 "w_mbytes_per_sec": 0 00:10:39.557 }, 00:10:39.557 "claimed": true, 00:10:39.557 "claim_type": "exclusive_write", 00:10:39.557 "zoned": false, 00:10:39.557 "supported_io_types": { 00:10:39.557 "read": true, 00:10:39.557 "write": true, 00:10:39.557 "unmap": true, 00:10:39.557 "flush": true, 00:10:39.557 "reset": true, 00:10:39.557 "nvme_admin": false, 00:10:39.557 "nvme_io": false, 00:10:39.557 "nvme_io_md": 
false, 00:10:39.557 "write_zeroes": true, 00:10:39.557 "zcopy": true, 00:10:39.557 "get_zone_info": false, 00:10:39.557 "zone_management": false, 00:10:39.557 "zone_append": false, 00:10:39.557 "compare": false, 00:10:39.557 "compare_and_write": false, 00:10:39.557 "abort": true, 00:10:39.557 "seek_hole": false, 00:10:39.557 "seek_data": false, 00:10:39.557 "copy": true, 00:10:39.557 "nvme_iov_md": false 00:10:39.557 }, 00:10:39.557 "memory_domains": [ 00:10:39.557 { 00:10:39.557 "dma_device_id": "system", 00:10:39.557 "dma_device_type": 1 00:10:39.557 }, 00:10:39.557 { 00:10:39.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.557 "dma_device_type": 2 00:10:39.557 } 00:10:39.557 ], 00:10:39.557 "driver_specific": {} 00:10:39.557 } 00:10:39.557 ] 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.557 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.558 "name": "Existed_Raid", 00:10:39.558 "uuid": "246a9164-5070-4fb9-8cc7-fca20d2388d8", 00:10:39.558 "strip_size_kb": 64, 00:10:39.558 "state": "online", 00:10:39.558 "raid_level": "raid0", 00:10:39.558 "superblock": false, 00:10:39.558 "num_base_bdevs": 2, 00:10:39.558 "num_base_bdevs_discovered": 2, 00:10:39.558 "num_base_bdevs_operational": 2, 00:10:39.558 "base_bdevs_list": [ 00:10:39.558 { 00:10:39.558 "name": "BaseBdev1", 00:10:39.558 "uuid": "077025a3-a184-42ed-9240-057e714797ca", 00:10:39.558 "is_configured": true, 00:10:39.558 "data_offset": 0, 00:10:39.558 "data_size": 65536 00:10:39.558 }, 00:10:39.558 { 00:10:39.558 "name": "BaseBdev2", 00:10:39.558 "uuid": "68f5269f-ea6b-4892-bf1e-4c30200bbbb8", 00:10:39.558 "is_configured": true, 00:10:39.558 "data_offset": 0, 00:10:39.558 "data_size": 65536 00:10:39.558 } 00:10:39.558 ] 00:10:39.558 }' 00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:39.558 07:07:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.125 [2024-11-20 07:07:37.177078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.125 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.125 "name": "Existed_Raid", 00:10:40.125 "aliases": [ 00:10:40.125 "246a9164-5070-4fb9-8cc7-fca20d2388d8" 00:10:40.125 ], 00:10:40.126 "product_name": "Raid Volume", 00:10:40.126 "block_size": 512, 00:10:40.126 "num_blocks": 131072, 00:10:40.126 "uuid": "246a9164-5070-4fb9-8cc7-fca20d2388d8", 00:10:40.126 "assigned_rate_limits": { 00:10:40.126 "rw_ios_per_sec": 0, 00:10:40.126 "rw_mbytes_per_sec": 0, 00:10:40.126 "r_mbytes_per_sec": 
0, 00:10:40.126 "w_mbytes_per_sec": 0 00:10:40.126 }, 00:10:40.126 "claimed": false, 00:10:40.126 "zoned": false, 00:10:40.126 "supported_io_types": { 00:10:40.126 "read": true, 00:10:40.126 "write": true, 00:10:40.126 "unmap": true, 00:10:40.126 "flush": true, 00:10:40.126 "reset": true, 00:10:40.126 "nvme_admin": false, 00:10:40.126 "nvme_io": false, 00:10:40.126 "nvme_io_md": false, 00:10:40.126 "write_zeroes": true, 00:10:40.126 "zcopy": false, 00:10:40.126 "get_zone_info": false, 00:10:40.126 "zone_management": false, 00:10:40.126 "zone_append": false, 00:10:40.126 "compare": false, 00:10:40.126 "compare_and_write": false, 00:10:40.126 "abort": false, 00:10:40.126 "seek_hole": false, 00:10:40.126 "seek_data": false, 00:10:40.126 "copy": false, 00:10:40.126 "nvme_iov_md": false 00:10:40.126 }, 00:10:40.126 "memory_domains": [ 00:10:40.126 { 00:10:40.126 "dma_device_id": "system", 00:10:40.126 "dma_device_type": 1 00:10:40.126 }, 00:10:40.126 { 00:10:40.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.126 "dma_device_type": 2 00:10:40.126 }, 00:10:40.126 { 00:10:40.126 "dma_device_id": "system", 00:10:40.126 "dma_device_type": 1 00:10:40.126 }, 00:10:40.126 { 00:10:40.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.126 "dma_device_type": 2 00:10:40.126 } 00:10:40.126 ], 00:10:40.126 "driver_specific": { 00:10:40.126 "raid": { 00:10:40.126 "uuid": "246a9164-5070-4fb9-8cc7-fca20d2388d8", 00:10:40.126 "strip_size_kb": 64, 00:10:40.126 "state": "online", 00:10:40.126 "raid_level": "raid0", 00:10:40.126 "superblock": false, 00:10:40.126 "num_base_bdevs": 2, 00:10:40.126 "num_base_bdevs_discovered": 2, 00:10:40.126 "num_base_bdevs_operational": 2, 00:10:40.126 "base_bdevs_list": [ 00:10:40.126 { 00:10:40.126 "name": "BaseBdev1", 00:10:40.126 "uuid": "077025a3-a184-42ed-9240-057e714797ca", 00:10:40.126 "is_configured": true, 00:10:40.126 "data_offset": 0, 00:10:40.126 "data_size": 65536 00:10:40.126 }, 00:10:40.126 { 00:10:40.126 "name": "BaseBdev2", 
00:10:40.126 "uuid": "68f5269f-ea6b-4892-bf1e-4c30200bbbb8", 00:10:40.126 "is_configured": true, 00:10:40.126 "data_offset": 0, 00:10:40.126 "data_size": 65536 00:10:40.126 } 00:10:40.126 ] 00:10:40.126 } 00:10:40.126 } 00:10:40.126 }' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:40.126 BaseBdev2' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.126 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.126 [2024-11-20 07:07:37.440832] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.126 [2024-11-20 07:07:37.440871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.126 [2024-11-20 07:07:37.440982] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.384 "name": "Existed_Raid", 00:10:40.384 "uuid": "246a9164-5070-4fb9-8cc7-fca20d2388d8", 00:10:40.384 "strip_size_kb": 64, 00:10:40.384 
"state": "offline", 00:10:40.384 "raid_level": "raid0", 00:10:40.384 "superblock": false, 00:10:40.384 "num_base_bdevs": 2, 00:10:40.384 "num_base_bdevs_discovered": 1, 00:10:40.384 "num_base_bdevs_operational": 1, 00:10:40.384 "base_bdevs_list": [ 00:10:40.384 { 00:10:40.384 "name": null, 00:10:40.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.384 "is_configured": false, 00:10:40.384 "data_offset": 0, 00:10:40.384 "data_size": 65536 00:10:40.384 }, 00:10:40.384 { 00:10:40.384 "name": "BaseBdev2", 00:10:40.384 "uuid": "68f5269f-ea6b-4892-bf1e-4c30200bbbb8", 00:10:40.384 "is_configured": true, 00:10:40.384 "data_offset": 0, 00:10:40.384 "data_size": 65536 00:10:40.384 } 00:10:40.384 ] 00:10:40.384 }' 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.384 07:07:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.948 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.948 [2024-11-20 07:07:38.142764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.949 [2024-11-20 07:07:38.142825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.949 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60564 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60564 ']' 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60564 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60564 00:10:41.206 killing process with pid 60564 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60564' 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60564 00:10:41.206 [2024-11-20 07:07:38.328490] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:41.206 07:07:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60564 00:10:41.206 [2024-11-20 07:07:38.344135] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.140 07:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.140 00:10:42.140 real 0m5.601s 00:10:42.140 user 0m8.449s 00:10:42.140 sys 0m0.808s 00:10:42.140 07:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.140 07:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.140 ************************************ 00:10:42.140 END TEST raid_state_function_test 00:10:42.140 ************************************ 00:10:42.398 07:07:39 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:10:42.398 07:07:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:10:42.398 07:07:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.398 07:07:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.398 ************************************ 00:10:42.398 START TEST raid_state_function_test_sb 00:10:42.398 ************************************ 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60821 00:10:42.398 Process raid pid: 60821 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60821' 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60821 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60821 ']' 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.398 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.398 07:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.398 [2024-11-20 07:07:39.598956] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:42.398 [2024-11-20 07:07:39.599130] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.684 [2024-11-20 07:07:39.780964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.684 [2024-11-20 07:07:39.921019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.973 [2024-11-20 07:07:40.130321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.973 [2024-11-20 07:07:40.130375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 [2024-11-20 07:07:40.557181] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:10:43.541 [2024-11-20 07:07:40.557249] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.541 [2024-11-20 07:07:40.557266] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.541 [2024-11-20 07:07:40.557284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.541 
07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.541 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.541 "name": "Existed_Raid", 00:10:43.541 "uuid": "c1e4dee9-33e3-4b6a-89f9-81db33a2adf7", 00:10:43.541 "strip_size_kb": 64, 00:10:43.541 "state": "configuring", 00:10:43.541 "raid_level": "raid0", 00:10:43.541 "superblock": true, 00:10:43.541 "num_base_bdevs": 2, 00:10:43.541 "num_base_bdevs_discovered": 0, 00:10:43.541 "num_base_bdevs_operational": 2, 00:10:43.541 "base_bdevs_list": [ 00:10:43.541 { 00:10:43.541 "name": "BaseBdev1", 00:10:43.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.541 "is_configured": false, 00:10:43.541 "data_offset": 0, 00:10:43.542 "data_size": 0 00:10:43.542 }, 00:10:43.542 { 00:10:43.542 "name": "BaseBdev2", 00:10:43.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.542 "is_configured": false, 00:10:43.542 "data_offset": 0, 00:10:43.542 "data_size": 0 00:10:43.542 } 00:10:43.542 ] 00:10:43.542 }' 00:10:43.542 07:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.542 07:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.800 [2024-11-20 07:07:41.057238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:10:43.800 [2024-11-20 07:07:41.057286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.800 [2024-11-20 07:07:41.065231] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:43.800 [2024-11-20 07:07:41.065283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:43.800 [2024-11-20 07:07:41.065299] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:43.800 [2024-11-20 07:07:41.065317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.800 [2024-11-20 07:07:41.111028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.800 BaseBdev1 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.800 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.059 [ 00:10:44.059 { 00:10:44.059 "name": "BaseBdev1", 00:10:44.059 "aliases": [ 00:10:44.059 "5ff0fbcf-6940-4fc3-8db0-d6487f23b5af" 00:10:44.059 ], 00:10:44.059 "product_name": "Malloc disk", 00:10:44.059 "block_size": 512, 00:10:44.059 "num_blocks": 65536, 00:10:44.059 "uuid": "5ff0fbcf-6940-4fc3-8db0-d6487f23b5af", 00:10:44.059 "assigned_rate_limits": { 00:10:44.059 "rw_ios_per_sec": 0, 00:10:44.059 "rw_mbytes_per_sec": 0, 00:10:44.059 "r_mbytes_per_sec": 0, 00:10:44.059 "w_mbytes_per_sec": 0 00:10:44.059 }, 00:10:44.059 "claimed": true, 
00:10:44.059 "claim_type": "exclusive_write", 00:10:44.059 "zoned": false, 00:10:44.059 "supported_io_types": { 00:10:44.059 "read": true, 00:10:44.059 "write": true, 00:10:44.059 "unmap": true, 00:10:44.059 "flush": true, 00:10:44.059 "reset": true, 00:10:44.059 "nvme_admin": false, 00:10:44.059 "nvme_io": false, 00:10:44.059 "nvme_io_md": false, 00:10:44.059 "write_zeroes": true, 00:10:44.059 "zcopy": true, 00:10:44.059 "get_zone_info": false, 00:10:44.059 "zone_management": false, 00:10:44.059 "zone_append": false, 00:10:44.059 "compare": false, 00:10:44.059 "compare_and_write": false, 00:10:44.059 "abort": true, 00:10:44.059 "seek_hole": false, 00:10:44.059 "seek_data": false, 00:10:44.059 "copy": true, 00:10:44.059 "nvme_iov_md": false 00:10:44.059 }, 00:10:44.059 "memory_domains": [ 00:10:44.059 { 00:10:44.059 "dma_device_id": "system", 00:10:44.059 "dma_device_type": 1 00:10:44.059 }, 00:10:44.059 { 00:10:44.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.059 "dma_device_type": 2 00:10:44.059 } 00:10:44.059 ], 00:10:44.059 "driver_specific": {} 00:10:44.059 } 00:10:44.059 ] 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.059 07:07:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.059 "name": "Existed_Raid", 00:10:44.059 "uuid": "65bbca1a-d687-46ea-a3e9-9f728078abd2", 00:10:44.059 "strip_size_kb": 64, 00:10:44.059 "state": "configuring", 00:10:44.059 "raid_level": "raid0", 00:10:44.059 "superblock": true, 00:10:44.059 "num_base_bdevs": 2, 00:10:44.059 "num_base_bdevs_discovered": 1, 00:10:44.059 "num_base_bdevs_operational": 2, 00:10:44.059 "base_bdevs_list": [ 00:10:44.059 { 00:10:44.059 "name": "BaseBdev1", 00:10:44.059 "uuid": "5ff0fbcf-6940-4fc3-8db0-d6487f23b5af", 00:10:44.059 "is_configured": true, 00:10:44.059 "data_offset": 2048, 00:10:44.059 "data_size": 63488 00:10:44.059 }, 00:10:44.059 { 00:10:44.059 "name": "BaseBdev2", 00:10:44.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.059 
"is_configured": false, 00:10:44.059 "data_offset": 0, 00:10:44.059 "data_size": 0 00:10:44.059 } 00:10:44.059 ] 00:10:44.059 }' 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.059 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.627 [2024-11-20 07:07:41.663197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.627 [2024-11-20 07:07:41.663268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.627 [2024-11-20 07:07:41.671288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:44.627 [2024-11-20 07:07:41.673729] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.627 [2024-11-20 07:07:41.673793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.627 07:07:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.627 07:07:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.627 "name": "Existed_Raid", 00:10:44.627 "uuid": "fcc08cb8-644d-4388-a346-2e686e3cc974", 00:10:44.627 "strip_size_kb": 64, 00:10:44.627 "state": "configuring", 00:10:44.627 "raid_level": "raid0", 00:10:44.627 "superblock": true, 00:10:44.627 "num_base_bdevs": 2, 00:10:44.627 "num_base_bdevs_discovered": 1, 00:10:44.627 "num_base_bdevs_operational": 2, 00:10:44.627 "base_bdevs_list": [ 00:10:44.627 { 00:10:44.627 "name": "BaseBdev1", 00:10:44.627 "uuid": "5ff0fbcf-6940-4fc3-8db0-d6487f23b5af", 00:10:44.627 "is_configured": true, 00:10:44.627 "data_offset": 2048, 00:10:44.627 "data_size": 63488 00:10:44.627 }, 00:10:44.627 { 00:10:44.627 "name": "BaseBdev2", 00:10:44.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.627 "is_configured": false, 00:10:44.627 "data_offset": 0, 00:10:44.627 "data_size": 0 00:10:44.627 } 00:10:44.627 ] 00:10:44.627 }' 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.627 07:07:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.885 [2024-11-20 07:07:42.199456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.885 [2024-11-20 07:07:42.199787] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:44.885 [2024-11-20 07:07:42.199806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:44.885 [2024-11-20 07:07:42.200162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:10:44.885 BaseBdev2 00:10:44.885 [2024-11-20 07:07:42.200351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:44.885 [2024-11-20 07:07:42.200372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:44.885 [2024-11-20 07:07:42.200547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:44.885 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.143 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.143 07:07:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.143 [ 00:10:45.143 { 00:10:45.143 "name": "BaseBdev2", 00:10:45.143 "aliases": [ 00:10:45.143 "5b6c441a-c124-4c8d-8455-465bed406530" 00:10:45.143 ], 00:10:45.143 "product_name": "Malloc disk", 00:10:45.143 "block_size": 512, 00:10:45.143 "num_blocks": 65536, 00:10:45.144 "uuid": "5b6c441a-c124-4c8d-8455-465bed406530", 00:10:45.144 "assigned_rate_limits": { 00:10:45.144 "rw_ios_per_sec": 0, 00:10:45.144 "rw_mbytes_per_sec": 0, 00:10:45.144 "r_mbytes_per_sec": 0, 00:10:45.144 "w_mbytes_per_sec": 0 00:10:45.144 }, 00:10:45.144 "claimed": true, 00:10:45.144 "claim_type": "exclusive_write", 00:10:45.144 "zoned": false, 00:10:45.144 "supported_io_types": { 00:10:45.144 "read": true, 00:10:45.144 "write": true, 00:10:45.144 "unmap": true, 00:10:45.144 "flush": true, 00:10:45.144 "reset": true, 00:10:45.144 "nvme_admin": false, 00:10:45.144 "nvme_io": false, 00:10:45.144 "nvme_io_md": false, 00:10:45.144 "write_zeroes": true, 00:10:45.144 "zcopy": true, 00:10:45.144 "get_zone_info": false, 00:10:45.144 "zone_management": false, 00:10:45.144 "zone_append": false, 00:10:45.144 "compare": false, 00:10:45.144 "compare_and_write": false, 00:10:45.144 "abort": true, 00:10:45.144 "seek_hole": false, 00:10:45.144 "seek_data": false, 00:10:45.144 "copy": true, 00:10:45.144 "nvme_iov_md": false 00:10:45.144 }, 00:10:45.144 "memory_domains": [ 00:10:45.144 { 00:10:45.144 "dma_device_id": "system", 00:10:45.144 "dma_device_type": 1 00:10:45.144 }, 00:10:45.144 { 00:10:45.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.144 "dma_device_type": 2 00:10:45.144 } 00:10:45.144 ], 00:10:45.144 "driver_specific": {} 00:10:45.144 } 00:10:45.144 ] 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.144 07:07:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.144 07:07:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.144 "name": "Existed_Raid", 00:10:45.144 "uuid": "fcc08cb8-644d-4388-a346-2e686e3cc974", 00:10:45.144 "strip_size_kb": 64, 00:10:45.144 "state": "online", 00:10:45.144 "raid_level": "raid0", 00:10:45.144 "superblock": true, 00:10:45.144 "num_base_bdevs": 2, 00:10:45.144 "num_base_bdevs_discovered": 2, 00:10:45.144 "num_base_bdevs_operational": 2, 00:10:45.144 "base_bdevs_list": [ 00:10:45.144 { 00:10:45.144 "name": "BaseBdev1", 00:10:45.144 "uuid": "5ff0fbcf-6940-4fc3-8db0-d6487f23b5af", 00:10:45.144 "is_configured": true, 00:10:45.144 "data_offset": 2048, 00:10:45.144 "data_size": 63488 00:10:45.144 }, 00:10:45.144 { 00:10:45.144 "name": "BaseBdev2", 00:10:45.144 "uuid": "5b6c441a-c124-4c8d-8455-465bed406530", 00:10:45.144 "is_configured": true, 00:10:45.144 "data_offset": 2048, 00:10:45.144 "data_size": 63488 00:10:45.144 } 00:10:45.144 ] 00:10:45.144 }' 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.144 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.402 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:45.402 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:45.402 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:45.402 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.402 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.402 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.660 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:10:45.660 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.660 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.660 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.660 [2024-11-20 07:07:42.724071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.660 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.660 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.660 "name": "Existed_Raid", 00:10:45.660 "aliases": [ 00:10:45.660 "fcc08cb8-644d-4388-a346-2e686e3cc974" 00:10:45.660 ], 00:10:45.660 "product_name": "Raid Volume", 00:10:45.660 "block_size": 512, 00:10:45.660 "num_blocks": 126976, 00:10:45.660 "uuid": "fcc08cb8-644d-4388-a346-2e686e3cc974", 00:10:45.660 "assigned_rate_limits": { 00:10:45.660 "rw_ios_per_sec": 0, 00:10:45.660 "rw_mbytes_per_sec": 0, 00:10:45.660 "r_mbytes_per_sec": 0, 00:10:45.660 "w_mbytes_per_sec": 0 00:10:45.660 }, 00:10:45.660 "claimed": false, 00:10:45.660 "zoned": false, 00:10:45.660 "supported_io_types": { 00:10:45.660 "read": true, 00:10:45.660 "write": true, 00:10:45.660 "unmap": true, 00:10:45.660 "flush": true, 00:10:45.660 "reset": true, 00:10:45.661 "nvme_admin": false, 00:10:45.661 "nvme_io": false, 00:10:45.661 "nvme_io_md": false, 00:10:45.661 "write_zeroes": true, 00:10:45.661 "zcopy": false, 00:10:45.661 "get_zone_info": false, 00:10:45.661 "zone_management": false, 00:10:45.661 "zone_append": false, 00:10:45.661 "compare": false, 00:10:45.661 "compare_and_write": false, 00:10:45.661 "abort": false, 00:10:45.661 "seek_hole": false, 00:10:45.661 "seek_data": false, 00:10:45.661 "copy": false, 00:10:45.661 "nvme_iov_md": false 00:10:45.661 }, 00:10:45.661 "memory_domains": [ 00:10:45.661 { 00:10:45.661 
"dma_device_id": "system", 00:10:45.661 "dma_device_type": 1 00:10:45.661 }, 00:10:45.661 { 00:10:45.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.661 "dma_device_type": 2 00:10:45.661 }, 00:10:45.661 { 00:10:45.661 "dma_device_id": "system", 00:10:45.661 "dma_device_type": 1 00:10:45.661 }, 00:10:45.661 { 00:10:45.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.661 "dma_device_type": 2 00:10:45.661 } 00:10:45.661 ], 00:10:45.661 "driver_specific": { 00:10:45.661 "raid": { 00:10:45.661 "uuid": "fcc08cb8-644d-4388-a346-2e686e3cc974", 00:10:45.661 "strip_size_kb": 64, 00:10:45.661 "state": "online", 00:10:45.661 "raid_level": "raid0", 00:10:45.661 "superblock": true, 00:10:45.661 "num_base_bdevs": 2, 00:10:45.661 "num_base_bdevs_discovered": 2, 00:10:45.661 "num_base_bdevs_operational": 2, 00:10:45.661 "base_bdevs_list": [ 00:10:45.661 { 00:10:45.661 "name": "BaseBdev1", 00:10:45.661 "uuid": "5ff0fbcf-6940-4fc3-8db0-d6487f23b5af", 00:10:45.661 "is_configured": true, 00:10:45.661 "data_offset": 2048, 00:10:45.661 "data_size": 63488 00:10:45.661 }, 00:10:45.661 { 00:10:45.661 "name": "BaseBdev2", 00:10:45.661 "uuid": "5b6c441a-c124-4c8d-8455-465bed406530", 00:10:45.661 "is_configured": true, 00:10:45.661 "data_offset": 2048, 00:10:45.661 "data_size": 63488 00:10:45.661 } 00:10:45.661 ] 00:10:45.661 } 00:10:45.661 } 00:10:45.661 }' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:45.661 BaseBdev2' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.661 07:07:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.661 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.920 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:45.920 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:45.920 07:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:10:45.920 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.920 07:07:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.920 [2024-11-20 07:07:42.987779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.920 [2024-11-20 07:07:42.987827] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:45.920 [2024-11-20 07:07:42.987909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.920 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.920 "name": "Existed_Raid", 00:10:45.920 "uuid": "fcc08cb8-644d-4388-a346-2e686e3cc974", 00:10:45.920 "strip_size_kb": 64, 00:10:45.920 "state": "offline", 00:10:45.920 "raid_level": "raid0", 00:10:45.920 "superblock": true, 00:10:45.920 "num_base_bdevs": 2, 00:10:45.920 "num_base_bdevs_discovered": 1, 00:10:45.920 "num_base_bdevs_operational": 1, 00:10:45.920 "base_bdevs_list": [ 00:10:45.920 { 00:10:45.920 "name": null, 00:10:45.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.920 "is_configured": false, 00:10:45.920 "data_offset": 0, 00:10:45.920 "data_size": 63488 00:10:45.920 }, 00:10:45.920 { 00:10:45.920 "name": "BaseBdev2", 00:10:45.920 "uuid": "5b6c441a-c124-4c8d-8455-465bed406530", 00:10:45.920 "is_configured": true, 00:10:45.920 "data_offset": 2048, 00:10:45.920 "data_size": 63488 00:10:45.920 } 00:10:45.921 ] 
00:10:45.921 }' 00:10:45.921 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.921 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 [2024-11-20 07:07:43.641991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.487 [2024-11-20 07:07:43.642059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.487 07:07:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60821 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60821 ']' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60821 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.487 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60821 00:10:46.746 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.746 killing process with pid 60821 00:10:46.746 07:07:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.746 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60821' 00:10:46.746 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60821 00:10:46.746 [2024-11-20 07:07:43.819449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.746 07:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60821 00:10:46.746 [2024-11-20 07:07:43.834336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:47.680 07:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:47.680 00:10:47.680 real 0m5.448s 00:10:47.680 user 0m8.150s 00:10:47.680 sys 0m0.780s 00:10:47.680 07:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.680 07:07:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.680 ************************************ 00:10:47.680 END TEST raid_state_function_test_sb 00:10:47.680 ************************************ 00:10:47.680 07:07:44 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:10:47.680 07:07:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:47.681 07:07:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.681 07:07:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:47.681 ************************************ 00:10:47.681 START TEST raid_superblock_test 00:10:47.681 ************************************ 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61080 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61080 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61080 ']' 00:10:47.681 
07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:47.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:47.681 07:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.939 [2024-11-20 07:07:45.148705] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:47.939 [2024-11-20 07:07:45.148966] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61080 ] 00:10:48.198 [2024-11-20 07:07:45.338223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.198 [2024-11-20 07:07:45.477082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.457 [2024-11-20 07:07:45.694111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:48.457 [2024-11-20 07:07:45.694197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.024 malloc1 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.024 [2024-11-20 07:07:46.219861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.024 [2024-11-20 07:07:46.220082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.024 [2024-11-20 07:07:46.220160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:49.024 [2024-11-20 07:07:46.220357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:49.024 [2024-11-20 07:07:46.223148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.024 [2024-11-20 07:07:46.223321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.024 pt1 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.024 malloc2 00:10:49.024 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.025 [2024-11-20 07:07:46.276250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.025 [2024-11-20 07:07:46.276321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.025 [2024-11-20 07:07:46.276353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:49.025 [2024-11-20 07:07:46.276368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.025 [2024-11-20 07:07:46.279231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.025 [2024-11-20 07:07:46.279274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.025 pt2 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.025 [2024-11-20 07:07:46.284319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.025 [2024-11-20 07:07:46.286835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.025 [2024-11-20 07:07:46.287061] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:49.025 [2024-11-20 07:07:46.287080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:10:49.025 [2024-11-20 07:07:46.287408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:10:49.025 [2024-11-20 07:07:46.287608] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:49.025 [2024-11-20 07:07:46.287631] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:49.025 [2024-11-20 07:07:46.287821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.025 07:07:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.025 "name": "raid_bdev1", 00:10:49.025 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:49.025 "strip_size_kb": 64, 00:10:49.025 "state": "online", 00:10:49.025 "raid_level": "raid0", 00:10:49.025 "superblock": true, 00:10:49.025 "num_base_bdevs": 2, 00:10:49.025 "num_base_bdevs_discovered": 2, 00:10:49.025 "num_base_bdevs_operational": 2, 00:10:49.025 "base_bdevs_list": [ 00:10:49.025 { 00:10:49.025 "name": "pt1", 00:10:49.025 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.025 "is_configured": true, 00:10:49.025 "data_offset": 2048, 00:10:49.025 "data_size": 63488 00:10:49.025 }, 00:10:49.025 { 00:10:49.025 "name": "pt2", 00:10:49.025 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.025 "is_configured": true, 00:10:49.025 "data_offset": 2048, 00:10:49.025 "data_size": 63488 00:10:49.025 } 00:10:49.025 ] 00:10:49.025 }' 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.025 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.591 
07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.591 [2024-11-20 07:07:46.780798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.591 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.591 "name": "raid_bdev1", 00:10:49.591 "aliases": [ 00:10:49.591 "78284f1e-5417-45e7-8424-3a7eab1a9de1" 00:10:49.591 ], 00:10:49.591 "product_name": "Raid Volume", 00:10:49.591 "block_size": 512, 00:10:49.591 "num_blocks": 126976, 00:10:49.591 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:49.591 "assigned_rate_limits": { 00:10:49.591 "rw_ios_per_sec": 0, 00:10:49.591 "rw_mbytes_per_sec": 0, 00:10:49.591 "r_mbytes_per_sec": 0, 00:10:49.591 "w_mbytes_per_sec": 0 00:10:49.591 }, 00:10:49.591 "claimed": false, 00:10:49.591 "zoned": false, 00:10:49.591 "supported_io_types": { 00:10:49.591 "read": true, 00:10:49.591 "write": true, 00:10:49.591 "unmap": true, 00:10:49.591 "flush": true, 00:10:49.591 "reset": true, 00:10:49.591 "nvme_admin": false, 00:10:49.591 "nvme_io": false, 00:10:49.591 "nvme_io_md": false, 00:10:49.591 "write_zeroes": true, 00:10:49.591 "zcopy": false, 00:10:49.591 "get_zone_info": false, 00:10:49.591 "zone_management": false, 00:10:49.591 "zone_append": false, 00:10:49.591 "compare": false, 00:10:49.591 "compare_and_write": false, 00:10:49.591 "abort": false, 00:10:49.592 "seek_hole": false, 00:10:49.592 
"seek_data": false, 00:10:49.592 "copy": false, 00:10:49.592 "nvme_iov_md": false 00:10:49.592 }, 00:10:49.592 "memory_domains": [ 00:10:49.592 { 00:10:49.592 "dma_device_id": "system", 00:10:49.592 "dma_device_type": 1 00:10:49.592 }, 00:10:49.592 { 00:10:49.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.592 "dma_device_type": 2 00:10:49.592 }, 00:10:49.592 { 00:10:49.592 "dma_device_id": "system", 00:10:49.592 "dma_device_type": 1 00:10:49.592 }, 00:10:49.592 { 00:10:49.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.592 "dma_device_type": 2 00:10:49.592 } 00:10:49.592 ], 00:10:49.592 "driver_specific": { 00:10:49.592 "raid": { 00:10:49.592 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:49.592 "strip_size_kb": 64, 00:10:49.592 "state": "online", 00:10:49.592 "raid_level": "raid0", 00:10:49.592 "superblock": true, 00:10:49.592 "num_base_bdevs": 2, 00:10:49.592 "num_base_bdevs_discovered": 2, 00:10:49.592 "num_base_bdevs_operational": 2, 00:10:49.592 "base_bdevs_list": [ 00:10:49.592 { 00:10:49.592 "name": "pt1", 00:10:49.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.592 "is_configured": true, 00:10:49.592 "data_offset": 2048, 00:10:49.592 "data_size": 63488 00:10:49.592 }, 00:10:49.592 { 00:10:49.592 "name": "pt2", 00:10:49.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.592 "is_configured": true, 00:10:49.592 "data_offset": 2048, 00:10:49.592 "data_size": 63488 00:10:49.592 } 00:10:49.592 ] 00:10:49.592 } 00:10:49.592 } 00:10:49.592 }' 00:10:49.592 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.592 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:49.592 pt2' 00:10:49.592 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.850 07:07:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 07:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 [2024-11-20 07:07:47.024813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=78284f1e-5417-45e7-8424-3a7eab1a9de1 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 78284f1e-5417-45e7-8424-3a7eab1a9de1 ']' 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 [2024-11-20 07:07:47.072485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:49.850 [2024-11-20 07:07:47.072518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.850 [2024-11-20 07:07:47.072626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.850 [2024-11-20 07:07:47.072691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.850 [2024-11-20 07:07:47.072713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.850 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.851 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:49.851 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.851 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r 
'[.[] | select(.product_name == "passthru")] | any' 00:10:49.851 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 [2024-11-20 07:07:47.196536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:50.110 [2024-11-20 07:07:47.199169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:50.110 [2024-11-20 07:07:47.199399] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:50.110 [2024-11-20 07:07:47.199485] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:50.110 [2024-11-20 07:07:47.199512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.110 [2024-11-20 07:07:47.199535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:50.110 request: 00:10:50.110 { 00:10:50.110 "name": "raid_bdev1", 00:10:50.110 "raid_level": "raid0", 00:10:50.110 "base_bdevs": [ 00:10:50.110 "malloc1", 00:10:50.110 "malloc2" 00:10:50.110 ], 00:10:50.110 "strip_size_kb": 64, 00:10:50.110 "superblock": false, 00:10:50.110 "method": "bdev_raid_create", 00:10:50.110 "req_id": 1 00:10:50.110 } 00:10:50.110 Got JSON-RPC error response 00:10:50.110 response: 00:10:50.110 { 00:10:50.110 "code": -17, 00:10:50.110 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:50.110 } 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 
07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 [2024-11-20 07:07:47.256534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.110 [2024-11-20 07:07:47.256738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.110 [2024-11-20 07:07:47.256810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:50.110 [2024-11-20 07:07:47.257007] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.110 [2024-11-20 07:07:47.259955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.110 [2024-11-20 07:07:47.260114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.110 [2024-11-20 07:07:47.260309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:50.110 [2024-11-20 07:07:47.260482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.110 pt1 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.110 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.110 "name": "raid_bdev1", 00:10:50.110 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:50.110 "strip_size_kb": 64, 00:10:50.110 "state": "configuring", 00:10:50.110 "raid_level": "raid0", 00:10:50.110 "superblock": true, 00:10:50.110 "num_base_bdevs": 2, 00:10:50.110 "num_base_bdevs_discovered": 1, 00:10:50.110 "num_base_bdevs_operational": 2, 00:10:50.110 "base_bdevs_list": [ 00:10:50.110 { 00:10:50.110 "name": "pt1", 00:10:50.110 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:10:50.110 "is_configured": true, 00:10:50.110 "data_offset": 2048, 00:10:50.110 "data_size": 63488 00:10:50.110 }, 00:10:50.110 { 00:10:50.110 "name": null, 00:10:50.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.111 "is_configured": false, 00:10:50.111 "data_offset": 2048, 00:10:50.111 "data_size": 63488 00:10:50.111 } 00:10:50.111 ] 00:10:50.111 }' 00:10:50.111 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.111 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.677 [2024-11-20 07:07:47.777044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.677 [2024-11-20 07:07:47.777135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.677 [2024-11-20 07:07:47.777166] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:10:50.677 [2024-11-20 07:07:47.777184] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.677 [2024-11-20 07:07:47.777757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.677 [2024-11-20 07:07:47.777788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:10:50.677 [2024-11-20 07:07:47.777907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:50.677 [2024-11-20 07:07:47.777945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.677 [2024-11-20 07:07:47.778082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:50.677 [2024-11-20 07:07:47.778103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:50.677 [2024-11-20 07:07:47.778408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:50.677 [2024-11-20 07:07:47.778603] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:50.677 [2024-11-20 07:07:47.778618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:50.677 [2024-11-20 07:07:47.778789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.677 pt2 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:50.677 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.678 "name": "raid_bdev1", 00:10:50.678 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:50.678 "strip_size_kb": 64, 00:10:50.678 "state": "online", 00:10:50.678 "raid_level": "raid0", 00:10:50.678 "superblock": true, 00:10:50.678 "num_base_bdevs": 2, 00:10:50.678 "num_base_bdevs_discovered": 2, 00:10:50.678 "num_base_bdevs_operational": 2, 00:10:50.678 "base_bdevs_list": [ 00:10:50.678 { 00:10:50.678 "name": "pt1", 00:10:50.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.678 "is_configured": true, 00:10:50.678 "data_offset": 2048, 00:10:50.678 "data_size": 63488 00:10:50.678 }, 00:10:50.678 { 00:10:50.678 "name": "pt2", 00:10:50.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.678 "is_configured": true, 00:10:50.678 "data_offset": 2048, 00:10:50.678 "data_size": 63488 00:10:50.678 } 00:10:50.678 ] 00:10:50.678 }' 00:10:50.678 07:07:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.678 07:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.245 [2024-11-20 07:07:48.309615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:51.245 "name": "raid_bdev1", 00:10:51.245 "aliases": [ 00:10:51.245 "78284f1e-5417-45e7-8424-3a7eab1a9de1" 00:10:51.245 ], 00:10:51.245 "product_name": "Raid Volume", 00:10:51.245 "block_size": 512, 00:10:51.245 "num_blocks": 126976, 00:10:51.245 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:51.245 "assigned_rate_limits": { 00:10:51.245 "rw_ios_per_sec": 0, 00:10:51.245 "rw_mbytes_per_sec": 0, 00:10:51.245 
"r_mbytes_per_sec": 0, 00:10:51.245 "w_mbytes_per_sec": 0 00:10:51.245 }, 00:10:51.245 "claimed": false, 00:10:51.245 "zoned": false, 00:10:51.245 "supported_io_types": { 00:10:51.245 "read": true, 00:10:51.245 "write": true, 00:10:51.245 "unmap": true, 00:10:51.245 "flush": true, 00:10:51.245 "reset": true, 00:10:51.245 "nvme_admin": false, 00:10:51.245 "nvme_io": false, 00:10:51.245 "nvme_io_md": false, 00:10:51.245 "write_zeroes": true, 00:10:51.245 "zcopy": false, 00:10:51.245 "get_zone_info": false, 00:10:51.245 "zone_management": false, 00:10:51.245 "zone_append": false, 00:10:51.245 "compare": false, 00:10:51.245 "compare_and_write": false, 00:10:51.245 "abort": false, 00:10:51.245 "seek_hole": false, 00:10:51.245 "seek_data": false, 00:10:51.245 "copy": false, 00:10:51.245 "nvme_iov_md": false 00:10:51.245 }, 00:10:51.245 "memory_domains": [ 00:10:51.245 { 00:10:51.245 "dma_device_id": "system", 00:10:51.245 "dma_device_type": 1 00:10:51.245 }, 00:10:51.245 { 00:10:51.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.245 "dma_device_type": 2 00:10:51.245 }, 00:10:51.245 { 00:10:51.245 "dma_device_id": "system", 00:10:51.245 "dma_device_type": 1 00:10:51.245 }, 00:10:51.245 { 00:10:51.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.245 "dma_device_type": 2 00:10:51.245 } 00:10:51.245 ], 00:10:51.245 "driver_specific": { 00:10:51.245 "raid": { 00:10:51.245 "uuid": "78284f1e-5417-45e7-8424-3a7eab1a9de1", 00:10:51.245 "strip_size_kb": 64, 00:10:51.245 "state": "online", 00:10:51.245 "raid_level": "raid0", 00:10:51.245 "superblock": true, 00:10:51.245 "num_base_bdevs": 2, 00:10:51.245 "num_base_bdevs_discovered": 2, 00:10:51.245 "num_base_bdevs_operational": 2, 00:10:51.245 "base_bdevs_list": [ 00:10:51.245 { 00:10:51.245 "name": "pt1", 00:10:51.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.245 "is_configured": true, 00:10:51.245 "data_offset": 2048, 00:10:51.245 "data_size": 63488 00:10:51.245 }, 00:10:51.245 { 00:10:51.245 "name": 
"pt2", 00:10:51.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.245 "is_configured": true, 00:10:51.245 "data_offset": 2048, 00:10:51.245 "data_size": 63488 00:10:51.245 } 00:10:51.245 ] 00:10:51.245 } 00:10:51.245 } 00:10:51.245 }' 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:51.245 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:51.245 pt2' 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.246 07:07:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.246 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.504 [2024-11-20 07:07:48.569680] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 78284f1e-5417-45e7-8424-3a7eab1a9de1 '!=' 78284f1e-5417-45e7-8424-3a7eab1a9de1 ']' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61080 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61080 ']' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61080 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61080 00:10:51.504 killing process with pid 61080 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61080' 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61080 00:10:51.504 [2024-11-20 07:07:48.651690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:51.504 07:07:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61080 00:10:51.504 [2024-11-20 07:07:48.651809] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.505 [2024-11-20 07:07:48.651886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.505 [2024-11-20 07:07:48.651922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:51.763 [2024-11-20 07:07:48.836740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:52.698 07:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:52.698 00:10:52.698 real 0m4.891s 00:10:52.698 user 0m7.159s 00:10:52.698 sys 0m0.773s 00:10:52.698 07:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.698 07:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:52.698 ************************************ 00:10:52.698 END TEST raid_superblock_test 00:10:52.698 ************************************ 00:10:52.698 07:07:49 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:10:52.698 07:07:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:52.698 07:07:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.698 07:07:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:52.698 ************************************ 00:10:52.698 START TEST raid_read_error_test 00:10:52.698 ************************************ 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Sy1MMoA5p3 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61292 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61292 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61292 ']' 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.698 07:07:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.956 [2024-11-20 07:07:50.054785] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:10:52.956 [2024-11-20 07:07:50.055708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61292 ] 00:10:52.957 [2024-11-20 07:07:50.265950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.214 [2024-11-20 07:07:50.394647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.472 [2024-11-20 07:07:50.589139] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.472 [2024-11-20 07:07:50.589213] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.731 BaseBdev1_malloc 
00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.731 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 true 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 [2024-11-20 07:07:51.056498] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:53.991 [2024-11-20 07:07:51.056584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.991 [2024-11-20 07:07:51.056614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:53.991 [2024-11-20 07:07:51.056631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.991 [2024-11-20 07:07:51.059608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.991 [2024-11-20 07:07:51.059661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:53.991 BaseBdev1 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 BaseBdev2_malloc 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 true 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 [2024-11-20 07:07:51.113383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:53.991 [2024-11-20 07:07:51.113453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:53.991 [2024-11-20 07:07:51.113479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:53.991 [2024-11-20 07:07:51.113496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:53.991 [2024-11-20 07:07:51.116275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:53.991 [2024-11-20 07:07:51.116327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:53.991 BaseBdev2 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 [2024-11-20 07:07:51.121464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:53.991 [2024-11-20 07:07:51.123940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:53.991 [2024-11-20 07:07:51.124226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:53.991 [2024-11-20 07:07:51.124264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:53.991 [2024-11-20 07:07:51.124577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:53.991 [2024-11-20 07:07:51.124806] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:53.991 [2024-11-20 07:07:51.124825] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:53.991 [2024-11-20 07:07:51.125042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.991 "name": "raid_bdev1", 00:10:53.991 "uuid": "bc17559f-8b96-4f6d-9684-e9c288ad9b48", 00:10:53.991 "strip_size_kb": 64, 00:10:53.991 "state": "online", 00:10:53.991 "raid_level": "raid0", 00:10:53.991 "superblock": true, 00:10:53.991 "num_base_bdevs": 2, 00:10:53.991 "num_base_bdevs_discovered": 2, 00:10:53.991 "num_base_bdevs_operational": 2, 00:10:53.991 "base_bdevs_list": [ 00:10:53.991 { 00:10:53.991 "name": "BaseBdev1", 00:10:53.991 "uuid": "cfb82a72-845d-508b-91ae-8ee83fbcd12e", 00:10:53.991 "is_configured": true, 00:10:53.991 "data_offset": 2048, 00:10:53.991 "data_size": 63488 00:10:53.991 }, 00:10:53.991 { 00:10:53.991 "name": "BaseBdev2", 00:10:53.991 "uuid": 
"c11e5c8a-740e-5080-b331-2c4894ea1832", 00:10:53.991 "is_configured": true, 00:10:53.991 "data_offset": 2048, 00:10:53.991 "data_size": 63488 00:10:53.991 } 00:10:53.991 ] 00:10:53.991 }' 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.991 07:07:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.557 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:54.557 07:07:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:54.557 [2024-11-20 07:07:51.763039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.496 "name": "raid_bdev1", 00:10:55.496 "uuid": "bc17559f-8b96-4f6d-9684-e9c288ad9b48", 00:10:55.496 "strip_size_kb": 64, 00:10:55.496 "state": "online", 00:10:55.496 "raid_level": "raid0", 00:10:55.496 "superblock": true, 00:10:55.496 "num_base_bdevs": 2, 00:10:55.496 "num_base_bdevs_discovered": 2, 00:10:55.496 "num_base_bdevs_operational": 2, 00:10:55.496 "base_bdevs_list": [ 00:10:55.496 { 00:10:55.496 "name": "BaseBdev1", 00:10:55.496 "uuid": "cfb82a72-845d-508b-91ae-8ee83fbcd12e", 00:10:55.496 "is_configured": true, 00:10:55.496 "data_offset": 2048, 00:10:55.496 "data_size": 63488 00:10:55.496 }, 00:10:55.496 { 00:10:55.496 "name": "BaseBdev2", 00:10:55.496 "uuid": 
"c11e5c8a-740e-5080-b331-2c4894ea1832", 00:10:55.496 "is_configured": true, 00:10:55.496 "data_offset": 2048, 00:10:55.496 "data_size": 63488 00:10:55.496 } 00:10:55.496 ] 00:10:55.496 }' 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.496 07:07:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.063 07:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.063 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.064 [2024-11-20 07:07:53.194538] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.064 [2024-11-20 07:07:53.194742] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.064 [2024-11-20 07:07:53.198294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.064 [2024-11-20 07:07:53.198348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.064 [2024-11-20 07:07:53.198393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.064 [2024-11-20 07:07:53.198410] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:56.064 { 00:10:56.064 "results": [ 00:10:56.064 { 00:10:56.064 "job": "raid_bdev1", 00:10:56.064 "core_mask": "0x1", 00:10:56.064 "workload": "randrw", 00:10:56.064 "percentage": 50, 00:10:56.064 "status": "finished", 00:10:56.064 "queue_depth": 1, 00:10:56.064 "io_size": 131072, 00:10:56.064 "runtime": 1.428944, 00:10:56.064 "iops": 10812.180183408167, 00:10:56.064 "mibps": 1351.5225229260209, 00:10:56.064 "io_failed": 1, 00:10:56.064 "io_timeout": 0, 00:10:56.064 "avg_latency_us": 
129.18568754008274, 00:10:56.064 "min_latency_us": 39.09818181818182, 00:10:56.064 "max_latency_us": 1876.7127272727273 00:10:56.064 } 00:10:56.064 ], 00:10:56.064 "core_count": 1 00:10:56.064 } 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61292 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61292 ']' 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61292 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61292 00:10:56.064 killing process with pid 61292 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61292' 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61292 00:10:56.064 [2024-11-20 07:07:53.232700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.064 07:07:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61292 00:10:56.064 [2024-11-20 07:07:53.353452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Sy1MMoA5p3 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:57.438 
07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:57.438 ************************************ 00:10:57.438 END TEST raid_read_error_test 00:10:57.438 ************************************ 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:57.438 00:10:57.438 real 0m4.517s 00:10:57.438 user 0m5.638s 00:10:57.438 sys 0m0.575s 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.438 07:07:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.438 07:07:54 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:10:57.438 07:07:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:57.438 07:07:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.438 07:07:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.438 ************************************ 00:10:57.438 START TEST raid_write_error_test 00:10:57.438 ************************************ 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:57.438 07:07:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EoiGmQVIRC 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61437 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61437 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61437 ']' 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.438 07:07:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.438 [2024-11-20 07:07:54.613826] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:10:57.438 [2024-11-20 07:07:54.614027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61437 ] 00:10:57.696 [2024-11-20 07:07:54.805573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.696 [2024-11-20 07:07:55.011170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.956 [2024-11-20 07:07:55.216306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.956 [2024-11-20 07:07:55.216626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 BaseBdev1_malloc 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 true 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 [2024-11-20 07:07:55.648740] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:58.524 [2024-11-20 07:07:55.648817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.524 [2024-11-20 07:07:55.648847] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:58.524 [2024-11-20 07:07:55.648888] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.524 [2024-11-20 07:07:55.651867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.524 [2024-11-20 07:07:55.651936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:58.524 BaseBdev1 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 BaseBdev2_malloc 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:58.524 07:07:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 true 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 [2024-11-20 07:07:55.711767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:58.524 [2024-11-20 07:07:55.711920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.524 [2024-11-20 07:07:55.711955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:58.524 [2024-11-20 07:07:55.711974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.524 [2024-11-20 07:07:55.714959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.524 [2024-11-20 07:07:55.715009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:58.524 BaseBdev2 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.524 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.524 [2024-11-20 07:07:55.719950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:10:58.524 [2024-11-20 07:07:55.722701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.524 [2024-11-20 07:07:55.723126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.524 [2024-11-20 07:07:55.723309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:58.524 [2024-11-20 07:07:55.723692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:58.525 [2024-11-20 07:07:55.724116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.525 [2024-11-20 07:07:55.724262] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.525 [2024-11-20 07:07:55.724669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.525 "name": "raid_bdev1", 00:10:58.525 "uuid": "e5db5dea-7e7f-43c8-916d-dfc4bb94b390", 00:10:58.525 "strip_size_kb": 64, 00:10:58.525 "state": "online", 00:10:58.525 "raid_level": "raid0", 00:10:58.525 "superblock": true, 00:10:58.525 "num_base_bdevs": 2, 00:10:58.525 "num_base_bdevs_discovered": 2, 00:10:58.525 "num_base_bdevs_operational": 2, 00:10:58.525 "base_bdevs_list": [ 00:10:58.525 { 00:10:58.525 "name": "BaseBdev1", 00:10:58.525 "uuid": "9a5d8968-9357-5937-966e-fd9195fc29d7", 00:10:58.525 "is_configured": true, 00:10:58.525 "data_offset": 2048, 00:10:58.525 "data_size": 63488 00:10:58.525 }, 00:10:58.525 { 00:10:58.525 "name": "BaseBdev2", 00:10:58.525 "uuid": "ff43e04f-d079-5175-bd16-1914cc05cb89", 00:10:58.525 "is_configured": true, 00:10:58.525 "data_offset": 2048, 00:10:58.525 "data_size": 63488 00:10:58.525 } 00:10:58.525 ] 00:10:58.525 }' 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.525 07:07:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.091 07:07:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:59.091 07:07:56 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:59.091 [2024-11-20 07:07:56.342313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.027 07:07:57 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.027 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.028 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.028 "name": "raid_bdev1", 00:11:00.028 "uuid": "e5db5dea-7e7f-43c8-916d-dfc4bb94b390", 00:11:00.028 "strip_size_kb": 64, 00:11:00.028 "state": "online", 00:11:00.028 "raid_level": "raid0", 00:11:00.028 "superblock": true, 00:11:00.028 "num_base_bdevs": 2, 00:11:00.028 "num_base_bdevs_discovered": 2, 00:11:00.028 "num_base_bdevs_operational": 2, 00:11:00.028 "base_bdevs_list": [ 00:11:00.028 { 00:11:00.028 "name": "BaseBdev1", 00:11:00.028 "uuid": "9a5d8968-9357-5937-966e-fd9195fc29d7", 00:11:00.028 "is_configured": true, 00:11:00.028 "data_offset": 2048, 00:11:00.028 "data_size": 63488 00:11:00.028 }, 00:11:00.028 { 00:11:00.028 "name": "BaseBdev2", 00:11:00.028 "uuid": "ff43e04f-d079-5175-bd16-1914cc05cb89", 00:11:00.028 "is_configured": true, 00:11:00.028 "data_offset": 2048, 00:11:00.028 "data_size": 63488 00:11:00.028 } 00:11:00.028 ] 00:11:00.028 }' 00:11:00.028 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.028 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.594 [2024-11-20 07:07:57.732371] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:00.594 [2024-11-20 07:07:57.732553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.594 [2024-11-20 07:07:57.736034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.594 [2024-11-20 07:07:57.736220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.594 { 00:11:00.594 "results": [ 00:11:00.594 { 00:11:00.594 "job": "raid_bdev1", 00:11:00.594 "core_mask": "0x1", 00:11:00.594 "workload": "randrw", 00:11:00.594 "percentage": 50, 00:11:00.594 "status": "finished", 00:11:00.594 "queue_depth": 1, 00:11:00.594 "io_size": 131072, 00:11:00.594 "runtime": 1.387862, 00:11:00.594 "iops": 11028.474012545916, 00:11:00.594 "mibps": 1378.5592515682395, 00:11:00.594 "io_failed": 1, 00:11:00.594 "io_timeout": 0, 00:11:00.594 "avg_latency_us": 126.73946109029141, 00:11:00.594 "min_latency_us": 41.89090909090909, 00:11:00.594 "max_latency_us": 1846.9236363636364 00:11:00.594 } 00:11:00.594 ], 00:11:00.594 "core_count": 1 00:11:00.594 } 00:11:00.594 [2024-11-20 07:07:57.736318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.594 [2024-11-20 07:07:57.736483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61437 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61437 ']' 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61437 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61437 00:11:00.594 killing process with pid 61437 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61437' 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61437 00:11:00.594 [2024-11-20 07:07:57.772544] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.594 07:07:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61437 00:11:00.594 [2024-11-20 07:07:57.900221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.036 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EoiGmQVIRC 00:11:02.036 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:02.036 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:02.037 00:11:02.037 real 0m4.522s 00:11:02.037 user 0m5.655s 00:11:02.037 sys 0m0.537s 00:11:02.037 ************************************ 00:11:02.037 END TEST raid_write_error_test 00:11:02.037 ************************************ 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.037 07:07:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.037 07:07:59 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:02.037 07:07:59 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:11:02.037 07:07:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.038 07:07:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.038 07:07:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.038 ************************************ 00:11:02.038 START TEST raid_state_function_test 00:11:02.038 ************************************ 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:02.038 Process raid pid: 61581 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:02.038 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:02.039 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61581 
00:11:02.039 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61581' 00:11:02.039 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61581 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61581 ']' 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.040 07:07:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.040 [2024-11-20 07:07:59.186173] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:11:02.040 [2024-11-20 07:07:59.186362] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.307 [2024-11-20 07:07:59.378834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.307 [2024-11-20 07:07:59.540242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.564 [2024-11-20 07:07:59.770842] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:02.564 [2024-11-20 07:07:59.770908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.131 [2024-11-20 07:08:00.211161] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.131 [2024-11-20 07:08:00.211237] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.131 [2024-11-20 07:08:00.211265] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.131 [2024-11-20 07:08:00.211281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.131 07:08:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.131 "name": "Existed_Raid", 00:11:03.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.131 "strip_size_kb": 64, 00:11:03.131 "state": "configuring", 00:11:03.131 
"raid_level": "concat", 00:11:03.131 "superblock": false, 00:11:03.131 "num_base_bdevs": 2, 00:11:03.131 "num_base_bdevs_discovered": 0, 00:11:03.131 "num_base_bdevs_operational": 2, 00:11:03.131 "base_bdevs_list": [ 00:11:03.131 { 00:11:03.131 "name": "BaseBdev1", 00:11:03.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.131 "is_configured": false, 00:11:03.131 "data_offset": 0, 00:11:03.131 "data_size": 0 00:11:03.131 }, 00:11:03.131 { 00:11:03.131 "name": "BaseBdev2", 00:11:03.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.131 "is_configured": false, 00:11:03.131 "data_offset": 0, 00:11:03.131 "data_size": 0 00:11:03.131 } 00:11:03.131 ] 00:11:03.131 }' 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.131 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.698 [2024-11-20 07:08:00.723283] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:03.698 [2024-11-20 07:08:00.723466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:03.698 [2024-11-20 07:08:00.731261] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:03.698 [2024-11-20 07:08:00.731320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:03.698 [2024-11-20 07:08:00.731335] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:03.698 [2024-11-20 07:08:00.731355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.698 [2024-11-20 07:08:00.779836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:03.698 BaseBdev1 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.698 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.698 [ 00:11:03.698 { 00:11:03.698 "name": "BaseBdev1", 00:11:03.698 "aliases": [ 00:11:03.698 "48cb8b0b-47d0-4304-a012-f3130c111957" 00:11:03.698 ], 00:11:03.698 "product_name": "Malloc disk", 00:11:03.698 "block_size": 512, 00:11:03.698 "num_blocks": 65536, 00:11:03.698 "uuid": "48cb8b0b-47d0-4304-a012-f3130c111957", 00:11:03.698 "assigned_rate_limits": { 00:11:03.698 "rw_ios_per_sec": 0, 00:11:03.698 "rw_mbytes_per_sec": 0, 00:11:03.698 "r_mbytes_per_sec": 0, 00:11:03.698 "w_mbytes_per_sec": 0 00:11:03.698 }, 00:11:03.698 "claimed": true, 00:11:03.698 "claim_type": "exclusive_write", 00:11:03.698 "zoned": false, 00:11:03.698 "supported_io_types": { 00:11:03.698 "read": true, 00:11:03.698 "write": true, 00:11:03.698 "unmap": true, 00:11:03.698 "flush": true, 00:11:03.698 "reset": true, 00:11:03.698 "nvme_admin": false, 00:11:03.698 "nvme_io": false, 00:11:03.698 "nvme_io_md": false, 00:11:03.698 "write_zeroes": true, 00:11:03.698 "zcopy": true, 00:11:03.698 "get_zone_info": false, 00:11:03.698 "zone_management": false, 00:11:03.698 "zone_append": false, 00:11:03.698 "compare": false, 00:11:03.698 "compare_and_write": false, 00:11:03.698 "abort": true, 00:11:03.698 "seek_hole": false, 00:11:03.698 "seek_data": false, 00:11:03.698 "copy": true, 00:11:03.698 "nvme_iov_md": 
false 00:11:03.698 }, 00:11:03.698 "memory_domains": [ 00:11:03.698 { 00:11:03.698 "dma_device_id": "system", 00:11:03.698 "dma_device_type": 1 00:11:03.698 }, 00:11:03.698 { 00:11:03.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.698 "dma_device_type": 2 00:11:03.698 } 00:11:03.698 ], 00:11:03.698 "driver_specific": {} 00:11:03.698 } 00:11:03.698 ] 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.699 07:08:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.699 "name": "Existed_Raid", 00:11:03.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.699 "strip_size_kb": 64, 00:11:03.699 "state": "configuring", 00:11:03.699 "raid_level": "concat", 00:11:03.699 "superblock": false, 00:11:03.699 "num_base_bdevs": 2, 00:11:03.699 "num_base_bdevs_discovered": 1, 00:11:03.699 "num_base_bdevs_operational": 2, 00:11:03.699 "base_bdevs_list": [ 00:11:03.699 { 00:11:03.699 "name": "BaseBdev1", 00:11:03.699 "uuid": "48cb8b0b-47d0-4304-a012-f3130c111957", 00:11:03.699 "is_configured": true, 00:11:03.699 "data_offset": 0, 00:11:03.699 "data_size": 65536 00:11:03.699 }, 00:11:03.699 { 00:11:03.699 "name": "BaseBdev2", 00:11:03.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.699 "is_configured": false, 00:11:03.699 "data_offset": 0, 00:11:03.699 "data_size": 0 00:11:03.699 } 00:11:03.699 ] 00:11:03.699 }' 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.699 07:08:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.265 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:04.265 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.265 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.265 [2024-11-20 07:08:01.324061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.266 [2024-11-20 07:08:01.324121] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.266 [2024-11-20 07:08:01.336125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.266 [2024-11-20 07:08:01.338922] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.266 [2024-11-20 07:08:01.338989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.266 "name": "Existed_Raid", 00:11:04.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.266 "strip_size_kb": 64, 00:11:04.266 "state": "configuring", 00:11:04.266 "raid_level": "concat", 00:11:04.266 "superblock": false, 00:11:04.266 "num_base_bdevs": 2, 00:11:04.266 "num_base_bdevs_discovered": 1, 00:11:04.266 "num_base_bdevs_operational": 2, 00:11:04.266 "base_bdevs_list": [ 00:11:04.266 { 00:11:04.266 "name": "BaseBdev1", 00:11:04.266 "uuid": "48cb8b0b-47d0-4304-a012-f3130c111957", 00:11:04.266 "is_configured": true, 00:11:04.266 "data_offset": 0, 00:11:04.266 "data_size": 65536 00:11:04.266 }, 00:11:04.266 { 00:11:04.266 "name": "BaseBdev2", 00:11:04.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.266 "is_configured": false, 00:11:04.266 "data_offset": 0, 00:11:04.266 "data_size": 0 
00:11:04.266 } 00:11:04.266 ] 00:11:04.266 }' 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.266 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.833 [2024-11-20 07:08:01.908364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.833 [2024-11-20 07:08:01.908433] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.833 [2024-11-20 07:08:01.908445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:11:04.833 [2024-11-20 07:08:01.908860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:04.833 [2024-11-20 07:08:01.909142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.833 [2024-11-20 07:08:01.909166] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:04.833 [2024-11-20 07:08:01.909487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.833 BaseBdev2 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:04.833 07:08:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.833 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.833 [ 00:11:04.833 { 00:11:04.833 "name": "BaseBdev2", 00:11:04.833 "aliases": [ 00:11:04.833 "b4e90824-b84b-4727-b821-b772c1bb6d57" 00:11:04.833 ], 00:11:04.833 "product_name": "Malloc disk", 00:11:04.833 "block_size": 512, 00:11:04.833 "num_blocks": 65536, 00:11:04.833 "uuid": "b4e90824-b84b-4727-b821-b772c1bb6d57", 00:11:04.833 "assigned_rate_limits": { 00:11:04.833 "rw_ios_per_sec": 0, 00:11:04.833 "rw_mbytes_per_sec": 0, 00:11:04.833 "r_mbytes_per_sec": 0, 00:11:04.833 "w_mbytes_per_sec": 0 00:11:04.833 }, 00:11:04.833 "claimed": true, 00:11:04.834 "claim_type": "exclusive_write", 00:11:04.834 "zoned": false, 00:11:04.834 "supported_io_types": { 00:11:04.834 "read": true, 00:11:04.834 "write": true, 00:11:04.834 "unmap": true, 00:11:04.834 "flush": true, 00:11:04.834 "reset": true, 00:11:04.834 "nvme_admin": false, 00:11:04.834 "nvme_io": false, 00:11:04.834 "nvme_io_md": 
false, 00:11:04.834 "write_zeroes": true, 00:11:04.834 "zcopy": true, 00:11:04.834 "get_zone_info": false, 00:11:04.834 "zone_management": false, 00:11:04.834 "zone_append": false, 00:11:04.834 "compare": false, 00:11:04.834 "compare_and_write": false, 00:11:04.834 "abort": true, 00:11:04.834 "seek_hole": false, 00:11:04.834 "seek_data": false, 00:11:04.834 "copy": true, 00:11:04.834 "nvme_iov_md": false 00:11:04.834 }, 00:11:04.834 "memory_domains": [ 00:11:04.834 { 00:11:04.834 "dma_device_id": "system", 00:11:04.834 "dma_device_type": 1 00:11:04.834 }, 00:11:04.834 { 00:11:04.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.834 "dma_device_type": 2 00:11:04.834 } 00:11:04.834 ], 00:11:04.834 "driver_specific": {} 00:11:04.834 } 00:11:04.834 ] 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.834 "name": "Existed_Raid", 00:11:04.834 "uuid": "adea4b09-1df3-4a49-bc2a-cf223d128b76", 00:11:04.834 "strip_size_kb": 64, 00:11:04.834 "state": "online", 00:11:04.834 "raid_level": "concat", 00:11:04.834 "superblock": false, 00:11:04.834 "num_base_bdevs": 2, 00:11:04.834 "num_base_bdevs_discovered": 2, 00:11:04.834 "num_base_bdevs_operational": 2, 00:11:04.834 "base_bdevs_list": [ 00:11:04.834 { 00:11:04.834 "name": "BaseBdev1", 00:11:04.834 "uuid": "48cb8b0b-47d0-4304-a012-f3130c111957", 00:11:04.834 "is_configured": true, 00:11:04.834 "data_offset": 0, 00:11:04.834 "data_size": 65536 00:11:04.834 }, 00:11:04.834 { 00:11:04.834 "name": "BaseBdev2", 00:11:04.834 "uuid": "b4e90824-b84b-4727-b821-b772c1bb6d57", 00:11:04.834 "is_configured": true, 00:11:04.834 "data_offset": 0, 00:11:04.834 "data_size": 65536 00:11:04.834 } 00:11:04.834 ] 00:11:04.834 }' 00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:11:04.834 07:08:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:05.401 [2024-11-20 07:08:02.468953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.401 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:05.401 "name": "Existed_Raid", 00:11:05.401 "aliases": [ 00:11:05.401 "adea4b09-1df3-4a49-bc2a-cf223d128b76" 00:11:05.401 ], 00:11:05.401 "product_name": "Raid Volume", 00:11:05.401 "block_size": 512, 00:11:05.401 "num_blocks": 131072, 00:11:05.401 "uuid": "adea4b09-1df3-4a49-bc2a-cf223d128b76", 00:11:05.401 "assigned_rate_limits": { 00:11:05.401 "rw_ios_per_sec": 0, 00:11:05.402 "rw_mbytes_per_sec": 0, 00:11:05.402 "r_mbytes_per_sec": 
0, 00:11:05.402 "w_mbytes_per_sec": 0 00:11:05.402 }, 00:11:05.402 "claimed": false, 00:11:05.402 "zoned": false, 00:11:05.402 "supported_io_types": { 00:11:05.402 "read": true, 00:11:05.402 "write": true, 00:11:05.402 "unmap": true, 00:11:05.402 "flush": true, 00:11:05.402 "reset": true, 00:11:05.402 "nvme_admin": false, 00:11:05.402 "nvme_io": false, 00:11:05.402 "nvme_io_md": false, 00:11:05.402 "write_zeroes": true, 00:11:05.402 "zcopy": false, 00:11:05.402 "get_zone_info": false, 00:11:05.402 "zone_management": false, 00:11:05.402 "zone_append": false, 00:11:05.402 "compare": false, 00:11:05.402 "compare_and_write": false, 00:11:05.402 "abort": false, 00:11:05.402 "seek_hole": false, 00:11:05.402 "seek_data": false, 00:11:05.402 "copy": false, 00:11:05.402 "nvme_iov_md": false 00:11:05.402 }, 00:11:05.402 "memory_domains": [ 00:11:05.402 { 00:11:05.402 "dma_device_id": "system", 00:11:05.402 "dma_device_type": 1 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.402 "dma_device_type": 2 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "dma_device_id": "system", 00:11:05.402 "dma_device_type": 1 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.402 "dma_device_type": 2 00:11:05.402 } 00:11:05.402 ], 00:11:05.402 "driver_specific": { 00:11:05.402 "raid": { 00:11:05.402 "uuid": "adea4b09-1df3-4a49-bc2a-cf223d128b76", 00:11:05.402 "strip_size_kb": 64, 00:11:05.402 "state": "online", 00:11:05.402 "raid_level": "concat", 00:11:05.402 "superblock": false, 00:11:05.402 "num_base_bdevs": 2, 00:11:05.402 "num_base_bdevs_discovered": 2, 00:11:05.402 "num_base_bdevs_operational": 2, 00:11:05.402 "base_bdevs_list": [ 00:11:05.402 { 00:11:05.402 "name": "BaseBdev1", 00:11:05.402 "uuid": "48cb8b0b-47d0-4304-a012-f3130c111957", 00:11:05.402 "is_configured": true, 00:11:05.402 "data_offset": 0, 00:11:05.402 "data_size": 65536 00:11:05.402 }, 00:11:05.402 { 00:11:05.402 "name": "BaseBdev2", 
00:11:05.402 "uuid": "b4e90824-b84b-4727-b821-b772c1bb6d57", 00:11:05.402 "is_configured": true, 00:11:05.402 "data_offset": 0, 00:11:05.402 "data_size": 65536 00:11:05.402 } 00:11:05.402 ] 00:11:05.402 } 00:11:05.402 } 00:11:05.402 }' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:05.402 BaseBdev2' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.402 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.678 [2024-11-20 07:08:02.736730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.678 [2024-11-20 07:08:02.736782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.678 [2024-11-20 07:08:02.736861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.678 "name": "Existed_Raid", 00:11:05.678 "uuid": "adea4b09-1df3-4a49-bc2a-cf223d128b76", 00:11:05.678 "strip_size_kb": 64, 00:11:05.678 
"state": "offline", 00:11:05.678 "raid_level": "concat", 00:11:05.678 "superblock": false, 00:11:05.678 "num_base_bdevs": 2, 00:11:05.678 "num_base_bdevs_discovered": 1, 00:11:05.678 "num_base_bdevs_operational": 1, 00:11:05.678 "base_bdevs_list": [ 00:11:05.678 { 00:11:05.678 "name": null, 00:11:05.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.678 "is_configured": false, 00:11:05.678 "data_offset": 0, 00:11:05.678 "data_size": 65536 00:11:05.678 }, 00:11:05.678 { 00:11:05.678 "name": "BaseBdev2", 00:11:05.678 "uuid": "b4e90824-b84b-4727-b821-b772c1bb6d57", 00:11:05.678 "is_configured": true, 00:11:05.678 "data_offset": 0, 00:11:05.678 "data_size": 65536 00:11:05.678 } 00:11:05.678 ] 00:11:05.678 }' 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.678 07:08:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.278 [2024-11-20 07:08:03.435067] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.278 [2024-11-20 07:08:03.435138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61581 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61581 ']' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61581 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.278 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61581 00:11:06.536 killing process with pid 61581 00:11:06.536 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.536 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.537 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61581' 00:11:06.537 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61581 00:11:06.537 [2024-11-20 07:08:03.616642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.537 07:08:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61581 00:11:06.537 [2024-11-20 07:08:03.633244] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.472 ************************************ 00:11:07.472 END TEST raid_state_function_test 00:11:07.472 ************************************ 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:07.472 00:11:07.472 real 0m5.625s 00:11:07.472 user 0m8.493s 00:11:07.472 sys 0m0.819s 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.472 07:08:04 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:11:07.472 07:08:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:11:07.472 07:08:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.472 07:08:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.472 ************************************ 00:11:07.472 START TEST raid_state_function_test_sb 00:11:07.472 ************************************ 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:07.472 Process raid pid: 61834 00:11:07.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61834 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61834' 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61834 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61834 ']' 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.472 07:08:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:07.730 [2024-11-20 07:08:04.858461] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:11:07.730 [2024-11-20 07:08:04.859684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.988 [2024-11-20 07:08:05.048599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.988 [2024-11-20 07:08:05.189935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.246 [2024-11-20 07:08:05.417023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.246 [2024-11-20 07:08:05.417373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.814 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.814 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:08.814 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:08.814 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.815 [2024-11-20 07:08:05.849567] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:08.815 [2024-11-20 07:08:05.849655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:08.815 [2024-11-20 07:08:05.849673] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:08.815 [2024-11-20 07:08:05.849689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.815 "name": "Existed_Raid", 00:11:08.815 "uuid": "10db6548-1917-47fa-86b0-dc2ddb4c969e", 00:11:08.815 
"strip_size_kb": 64, 00:11:08.815 "state": "configuring", 00:11:08.815 "raid_level": "concat", 00:11:08.815 "superblock": true, 00:11:08.815 "num_base_bdevs": 2, 00:11:08.815 "num_base_bdevs_discovered": 0, 00:11:08.815 "num_base_bdevs_operational": 2, 00:11:08.815 "base_bdevs_list": [ 00:11:08.815 { 00:11:08.815 "name": "BaseBdev1", 00:11:08.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.815 "is_configured": false, 00:11:08.815 "data_offset": 0, 00:11:08.815 "data_size": 0 00:11:08.815 }, 00:11:08.815 { 00:11:08.815 "name": "BaseBdev2", 00:11:08.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.815 "is_configured": false, 00:11:08.815 "data_offset": 0, 00:11:08.815 "data_size": 0 00:11:08.815 } 00:11:08.815 ] 00:11:08.815 }' 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.815 07:08:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.073 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.073 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.073 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.073 [2024-11-20 07:08:06.377756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.073 [2024-11-20 07:08:06.377955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:09.074 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.074 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:09.074 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.074 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.074 [2024-11-20 07:08:06.389765] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.074 [2024-11-20 07:08:06.389831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.074 [2024-11-20 07:08:06.389862] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.074 [2024-11-20 07:08:06.389898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.332 [2024-11-20 07:08:06.437080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.332 BaseBdev1 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:09.332 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.333 [ 00:11:09.333 { 00:11:09.333 "name": "BaseBdev1", 00:11:09.333 "aliases": [ 00:11:09.333 "c3f6ad50-77f1-4142-a669-57e817bea948" 00:11:09.333 ], 00:11:09.333 "product_name": "Malloc disk", 00:11:09.333 "block_size": 512, 00:11:09.333 "num_blocks": 65536, 00:11:09.333 "uuid": "c3f6ad50-77f1-4142-a669-57e817bea948", 00:11:09.333 "assigned_rate_limits": { 00:11:09.333 "rw_ios_per_sec": 0, 00:11:09.333 "rw_mbytes_per_sec": 0, 00:11:09.333 "r_mbytes_per_sec": 0, 00:11:09.333 "w_mbytes_per_sec": 0 00:11:09.333 }, 00:11:09.333 "claimed": true, 00:11:09.333 "claim_type": "exclusive_write", 00:11:09.333 "zoned": false, 00:11:09.333 "supported_io_types": { 00:11:09.333 "read": true, 00:11:09.333 "write": true, 00:11:09.333 "unmap": true, 00:11:09.333 "flush": true, 00:11:09.333 "reset": true, 00:11:09.333 "nvme_admin": false, 00:11:09.333 "nvme_io": false, 00:11:09.333 "nvme_io_md": false, 00:11:09.333 "write_zeroes": true, 00:11:09.333 "zcopy": true, 00:11:09.333 "get_zone_info": false, 00:11:09.333 "zone_management": false, 00:11:09.333 "zone_append": false, 00:11:09.333 "compare": false, 00:11:09.333 
"compare_and_write": false, 00:11:09.333 "abort": true, 00:11:09.333 "seek_hole": false, 00:11:09.333 "seek_data": false, 00:11:09.333 "copy": true, 00:11:09.333 "nvme_iov_md": false 00:11:09.333 }, 00:11:09.333 "memory_domains": [ 00:11:09.333 { 00:11:09.333 "dma_device_id": "system", 00:11:09.333 "dma_device_type": 1 00:11:09.333 }, 00:11:09.333 { 00:11:09.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.333 "dma_device_type": 2 00:11:09.333 } 00:11:09.333 ], 00:11:09.333 "driver_specific": {} 00:11:09.333 } 00:11:09.333 ] 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.333 07:08:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.333 "name": "Existed_Raid", 00:11:09.333 "uuid": "7d42641f-8b0b-4585-be03-f2915e10bd7d", 00:11:09.333 "strip_size_kb": 64, 00:11:09.333 "state": "configuring", 00:11:09.333 "raid_level": "concat", 00:11:09.333 "superblock": true, 00:11:09.333 "num_base_bdevs": 2, 00:11:09.333 "num_base_bdevs_discovered": 1, 00:11:09.333 "num_base_bdevs_operational": 2, 00:11:09.333 "base_bdevs_list": [ 00:11:09.333 { 00:11:09.333 "name": "BaseBdev1", 00:11:09.333 "uuid": "c3f6ad50-77f1-4142-a669-57e817bea948", 00:11:09.333 "is_configured": true, 00:11:09.333 "data_offset": 2048, 00:11:09.333 "data_size": 63488 00:11:09.333 }, 00:11:09.333 { 00:11:09.333 "name": "BaseBdev2", 00:11:09.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.333 "is_configured": false, 00:11:09.333 "data_offset": 0, 00:11:09.333 "data_size": 0 00:11:09.333 } 00:11:09.333 ] 00:11:09.333 }' 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.333 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.900 [2024-11-20 07:08:06.985347] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.900 [2024-11-20 07:08:06.985407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.900 [2024-11-20 07:08:06.993440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:09.900 [2024-11-20 07:08:06.996183] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:09.900 [2024-11-20 07:08:06.996261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.900 07:08:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.900 "name": "Existed_Raid", 00:11:09.900 "uuid": "336630c6-11d5-4c5e-9858-1186fc23ceb1", 00:11:09.900 "strip_size_kb": 64, 00:11:09.900 "state": "configuring", 00:11:09.900 "raid_level": "concat", 00:11:09.900 "superblock": true, 00:11:09.900 "num_base_bdevs": 2, 00:11:09.900 "num_base_bdevs_discovered": 1, 00:11:09.900 "num_base_bdevs_operational": 2, 00:11:09.900 "base_bdevs_list": [ 00:11:09.900 { 00:11:09.900 "name": "BaseBdev1", 00:11:09.900 "uuid": 
"c3f6ad50-77f1-4142-a669-57e817bea948", 00:11:09.900 "is_configured": true, 00:11:09.900 "data_offset": 2048, 00:11:09.900 "data_size": 63488 00:11:09.900 }, 00:11:09.900 { 00:11:09.900 "name": "BaseBdev2", 00:11:09.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.900 "is_configured": false, 00:11:09.900 "data_offset": 0, 00:11:09.900 "data_size": 0 00:11:09.900 } 00:11:09.900 ] 00:11:09.900 }' 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.900 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.467 [2024-11-20 07:08:07.541434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:10.467 [2024-11-20 07:08:07.541739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:10.467 [2024-11-20 07:08:07.541759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:10.467 [2024-11-20 07:08:07.542150] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:10.467 BaseBdev2 00:11:10.467 [2024-11-20 07:08:07.542344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:10.467 [2024-11-20 07:08:07.542365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:10.467 [2024-11-20 07:08:07.542539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.467 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.467 [ 00:11:10.467 { 00:11:10.467 "name": "BaseBdev2", 00:11:10.467 "aliases": [ 00:11:10.467 "175ea3f3-84c9-4635-a71d-f1ecd4c4a8fb" 00:11:10.467 ], 00:11:10.467 "product_name": "Malloc disk", 00:11:10.467 "block_size": 512, 00:11:10.467 "num_blocks": 65536, 00:11:10.467 "uuid": "175ea3f3-84c9-4635-a71d-f1ecd4c4a8fb", 00:11:10.467 "assigned_rate_limits": { 00:11:10.467 "rw_ios_per_sec": 0, 00:11:10.467 "rw_mbytes_per_sec": 0, 00:11:10.467 "r_mbytes_per_sec": 0, 
00:11:10.467 "w_mbytes_per_sec": 0 00:11:10.467 }, 00:11:10.467 "claimed": true, 00:11:10.467 "claim_type": "exclusive_write", 00:11:10.467 "zoned": false, 00:11:10.467 "supported_io_types": { 00:11:10.467 "read": true, 00:11:10.467 "write": true, 00:11:10.467 "unmap": true, 00:11:10.468 "flush": true, 00:11:10.468 "reset": true, 00:11:10.468 "nvme_admin": false, 00:11:10.468 "nvme_io": false, 00:11:10.468 "nvme_io_md": false, 00:11:10.468 "write_zeroes": true, 00:11:10.468 "zcopy": true, 00:11:10.468 "get_zone_info": false, 00:11:10.468 "zone_management": false, 00:11:10.468 "zone_append": false, 00:11:10.468 "compare": false, 00:11:10.468 "compare_and_write": false, 00:11:10.468 "abort": true, 00:11:10.468 "seek_hole": false, 00:11:10.468 "seek_data": false, 00:11:10.468 "copy": true, 00:11:10.468 "nvme_iov_md": false 00:11:10.468 }, 00:11:10.468 "memory_domains": [ 00:11:10.468 { 00:11:10.468 "dma_device_id": "system", 00:11:10.468 "dma_device_type": 1 00:11:10.468 }, 00:11:10.468 { 00:11:10.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.468 "dma_device_type": 2 00:11:10.468 } 00:11:10.468 ], 00:11:10.468 "driver_specific": {} 00:11:10.468 } 00:11:10.468 ] 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.468 "name": "Existed_Raid", 00:11:10.468 "uuid": "336630c6-11d5-4c5e-9858-1186fc23ceb1", 00:11:10.468 "strip_size_kb": 64, 00:11:10.468 "state": "online", 00:11:10.468 "raid_level": "concat", 00:11:10.468 "superblock": true, 00:11:10.468 "num_base_bdevs": 2, 00:11:10.468 "num_base_bdevs_discovered": 2, 00:11:10.468 "num_base_bdevs_operational": 2, 00:11:10.468 "base_bdevs_list": [ 00:11:10.468 { 00:11:10.468 "name": "BaseBdev1", 00:11:10.468 "uuid": 
"c3f6ad50-77f1-4142-a669-57e817bea948", 00:11:10.468 "is_configured": true, 00:11:10.468 "data_offset": 2048, 00:11:10.468 "data_size": 63488 00:11:10.468 }, 00:11:10.468 { 00:11:10.468 "name": "BaseBdev2", 00:11:10.468 "uuid": "175ea3f3-84c9-4635-a71d-f1ecd4c4a8fb", 00:11:10.468 "is_configured": true, 00:11:10.468 "data_offset": 2048, 00:11:10.468 "data_size": 63488 00:11:10.468 } 00:11:10.468 ] 00:11:10.468 }' 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.468 07:08:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.127 [2024-11-20 07:08:08.126024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:11.127 "name": "Existed_Raid", 00:11:11.127 "aliases": [ 00:11:11.127 "336630c6-11d5-4c5e-9858-1186fc23ceb1" 00:11:11.127 ], 00:11:11.127 "product_name": "Raid Volume", 00:11:11.127 "block_size": 512, 00:11:11.127 "num_blocks": 126976, 00:11:11.127 "uuid": "336630c6-11d5-4c5e-9858-1186fc23ceb1", 00:11:11.127 "assigned_rate_limits": { 00:11:11.127 "rw_ios_per_sec": 0, 00:11:11.127 "rw_mbytes_per_sec": 0, 00:11:11.127 "r_mbytes_per_sec": 0, 00:11:11.127 "w_mbytes_per_sec": 0 00:11:11.127 }, 00:11:11.127 "claimed": false, 00:11:11.127 "zoned": false, 00:11:11.127 "supported_io_types": { 00:11:11.127 "read": true, 00:11:11.127 "write": true, 00:11:11.127 "unmap": true, 00:11:11.127 "flush": true, 00:11:11.127 "reset": true, 00:11:11.127 "nvme_admin": false, 00:11:11.127 "nvme_io": false, 00:11:11.127 "nvme_io_md": false, 00:11:11.127 "write_zeroes": true, 00:11:11.127 "zcopy": false, 00:11:11.127 "get_zone_info": false, 00:11:11.127 "zone_management": false, 00:11:11.127 "zone_append": false, 00:11:11.127 "compare": false, 00:11:11.127 "compare_and_write": false, 00:11:11.127 "abort": false, 00:11:11.127 "seek_hole": false, 00:11:11.127 "seek_data": false, 00:11:11.127 "copy": false, 00:11:11.127 "nvme_iov_md": false 00:11:11.127 }, 00:11:11.127 "memory_domains": [ 00:11:11.127 { 00:11:11.127 "dma_device_id": "system", 00:11:11.127 "dma_device_type": 1 00:11:11.127 }, 00:11:11.127 { 00:11:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.127 "dma_device_type": 2 00:11:11.127 }, 00:11:11.127 { 00:11:11.127 "dma_device_id": "system", 00:11:11.127 "dma_device_type": 1 00:11:11.127 }, 00:11:11.127 { 00:11:11.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.127 "dma_device_type": 2 00:11:11.127 } 00:11:11.127 ], 00:11:11.127 "driver_specific": { 00:11:11.127 "raid": { 00:11:11.127 "uuid": "336630c6-11d5-4c5e-9858-1186fc23ceb1", 00:11:11.127 
"strip_size_kb": 64, 00:11:11.127 "state": "online", 00:11:11.127 "raid_level": "concat", 00:11:11.127 "superblock": true, 00:11:11.127 "num_base_bdevs": 2, 00:11:11.127 "num_base_bdevs_discovered": 2, 00:11:11.127 "num_base_bdevs_operational": 2, 00:11:11.127 "base_bdevs_list": [ 00:11:11.127 { 00:11:11.127 "name": "BaseBdev1", 00:11:11.127 "uuid": "c3f6ad50-77f1-4142-a669-57e817bea948", 00:11:11.127 "is_configured": true, 00:11:11.127 "data_offset": 2048, 00:11:11.127 "data_size": 63488 00:11:11.127 }, 00:11:11.127 { 00:11:11.127 "name": "BaseBdev2", 00:11:11.127 "uuid": "175ea3f3-84c9-4635-a71d-f1ecd4c4a8fb", 00:11:11.127 "is_configured": true, 00:11:11.127 "data_offset": 2048, 00:11:11.127 "data_size": 63488 00:11:11.127 } 00:11:11.127 ] 00:11:11.127 } 00:11:11.127 } 00:11:11.127 }' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:11.127 BaseBdev2' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:11.127 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.128 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.128 [2024-11-20 07:08:08.393794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.128 [2024-11-20 07:08:08.393839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.128 [2024-11-20 07:08:08.393939] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.386 "name": "Existed_Raid", 00:11:11.386 "uuid": "336630c6-11d5-4c5e-9858-1186fc23ceb1", 00:11:11.386 "strip_size_kb": 64, 00:11:11.386 "state": "offline", 00:11:11.386 "raid_level": "concat", 00:11:11.386 "superblock": true, 00:11:11.386 "num_base_bdevs": 2, 00:11:11.386 "num_base_bdevs_discovered": 1, 00:11:11.386 "num_base_bdevs_operational": 1, 00:11:11.386 "base_bdevs_list": [ 00:11:11.386 { 00:11:11.386 "name": null, 00:11:11.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.386 "is_configured": false, 00:11:11.386 "data_offset": 0, 00:11:11.386 "data_size": 63488 00:11:11.386 }, 00:11:11.386 { 00:11:11.386 "name": "BaseBdev2", 00:11:11.386 "uuid": "175ea3f3-84c9-4635-a71d-f1ecd4c4a8fb", 00:11:11.386 "is_configured": true, 00:11:11.386 "data_offset": 2048, 00:11:11.386 "data_size": 63488 00:11:11.386 } 00:11:11.386 ] 00:11:11.386 }' 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.386 07:08:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.953 [2024-11-20 07:08:09.107973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:11.953 [2024-11-20 07:08:09.108038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.953 07:08:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61834 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61834 ']' 00:11:11.953 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61834 00:11:11.954 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:11.954 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.954 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61834 00:11:12.211 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.211 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.211 killing process with pid 61834 00:11:12.211 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61834' 00:11:12.211 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61834 00:11:12.211 [2024-11-20 07:08:09.275270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:12.211 07:08:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61834 00:11:12.211 [2024-11-20 07:08:09.290331] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:13.144 07:08:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:13.144 00:11:13.144 real 0m5.596s 00:11:13.144 user 0m8.495s 00:11:13.144 sys 0m0.776s 00:11:13.144 07:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.144 07:08:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 ************************************ 00:11:13.144 END TEST raid_state_function_test_sb 00:11:13.144 ************************************ 00:11:13.144 07:08:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:11:13.144 07:08:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:13.144 07:08:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.144 07:08:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:13.144 ************************************ 00:11:13.144 START TEST raid_superblock_test 00:11:13.144 ************************************ 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:13.144 
07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62097 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62097 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62097 ']' 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.144 07:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.403 [2024-11-20 07:08:10.480209] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:11:13.403 [2024-11-20 07:08:10.480380] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62097 ] 00:11:13.403 [2024-11-20 07:08:10.656694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.662 [2024-11-20 07:08:10.787960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.919 [2024-11-20 07:08:10.988333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.919 [2024-11-20 07:08:10.988409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.524 07:08:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.524 malloc1 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.524 [2024-11-20 07:08:11.588746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:14.524 [2024-11-20 07:08:11.588818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.524 [2024-11-20 07:08:11.588878] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:14.524 [2024-11-20 07:08:11.588896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.524 [2024-11-20 07:08:11.591629] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.524 [2024-11-20 07:08:11.591669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:14.524 pt1 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.524 07:08:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.524 malloc2 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.524 [2024-11-20 07:08:11.644721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:14.524 [2024-11-20 07:08:11.644798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.524 [2024-11-20 07:08:11.644846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:14.524 
[2024-11-20 07:08:11.644861] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.524 [2024-11-20 07:08:11.647833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.524 [2024-11-20 07:08:11.647889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:14.524 pt2 00:11:14.524 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.525 [2024-11-20 07:08:11.656932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:14.525 [2024-11-20 07:08:11.659432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:14.525 [2024-11-20 07:08:11.659660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:14.525 [2024-11-20 07:08:11.659679] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:14.525 [2024-11-20 07:08:11.660052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:14.525 [2024-11-20 07:08:11.660294] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:14.525 [2024-11-20 07:08:11.660326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:14.525 [2024-11-20 07:08:11.660531] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.525 "name": "raid_bdev1", 00:11:14.525 "uuid": 
"9cda0238-5116-4369-82b6-eda51371a15a", 00:11:14.525 "strip_size_kb": 64, 00:11:14.525 "state": "online", 00:11:14.525 "raid_level": "concat", 00:11:14.525 "superblock": true, 00:11:14.525 "num_base_bdevs": 2, 00:11:14.525 "num_base_bdevs_discovered": 2, 00:11:14.525 "num_base_bdevs_operational": 2, 00:11:14.525 "base_bdevs_list": [ 00:11:14.525 { 00:11:14.525 "name": "pt1", 00:11:14.525 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:14.525 "is_configured": true, 00:11:14.525 "data_offset": 2048, 00:11:14.525 "data_size": 63488 00:11:14.525 }, 00:11:14.525 { 00:11:14.525 "name": "pt2", 00:11:14.525 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:14.525 "is_configured": true, 00:11:14.525 "data_offset": 2048, 00:11:14.525 "data_size": 63488 00:11:14.525 } 00:11:14.525 ] 00:11:14.525 }' 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.525 07:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.093 
07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.093 [2024-11-20 07:08:12.237362] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.093 "name": "raid_bdev1", 00:11:15.093 "aliases": [ 00:11:15.093 "9cda0238-5116-4369-82b6-eda51371a15a" 00:11:15.093 ], 00:11:15.093 "product_name": "Raid Volume", 00:11:15.093 "block_size": 512, 00:11:15.093 "num_blocks": 126976, 00:11:15.093 "uuid": "9cda0238-5116-4369-82b6-eda51371a15a", 00:11:15.093 "assigned_rate_limits": { 00:11:15.093 "rw_ios_per_sec": 0, 00:11:15.093 "rw_mbytes_per_sec": 0, 00:11:15.093 "r_mbytes_per_sec": 0, 00:11:15.093 "w_mbytes_per_sec": 0 00:11:15.093 }, 00:11:15.093 "claimed": false, 00:11:15.093 "zoned": false, 00:11:15.093 "supported_io_types": { 00:11:15.093 "read": true, 00:11:15.093 "write": true, 00:11:15.093 "unmap": true, 00:11:15.093 "flush": true, 00:11:15.093 "reset": true, 00:11:15.093 "nvme_admin": false, 00:11:15.093 "nvme_io": false, 00:11:15.093 "nvme_io_md": false, 00:11:15.093 "write_zeroes": true, 00:11:15.093 "zcopy": false, 00:11:15.093 "get_zone_info": false, 00:11:15.093 "zone_management": false, 00:11:15.093 "zone_append": false, 00:11:15.093 "compare": false, 00:11:15.093 "compare_and_write": false, 00:11:15.093 "abort": false, 00:11:15.093 "seek_hole": false, 00:11:15.093 "seek_data": false, 00:11:15.093 "copy": false, 00:11:15.093 "nvme_iov_md": false 00:11:15.093 }, 00:11:15.093 "memory_domains": [ 00:11:15.093 { 00:11:15.093 "dma_device_id": "system", 00:11:15.093 "dma_device_type": 1 00:11:15.093 }, 00:11:15.093 { 00:11:15.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.093 "dma_device_type": 2 00:11:15.093 }, 00:11:15.093 { 00:11:15.093 "dma_device_id": "system", 00:11:15.093 
"dma_device_type": 1 00:11:15.093 }, 00:11:15.093 { 00:11:15.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.093 "dma_device_type": 2 00:11:15.093 } 00:11:15.093 ], 00:11:15.093 "driver_specific": { 00:11:15.093 "raid": { 00:11:15.093 "uuid": "9cda0238-5116-4369-82b6-eda51371a15a", 00:11:15.093 "strip_size_kb": 64, 00:11:15.093 "state": "online", 00:11:15.093 "raid_level": "concat", 00:11:15.093 "superblock": true, 00:11:15.093 "num_base_bdevs": 2, 00:11:15.093 "num_base_bdevs_discovered": 2, 00:11:15.093 "num_base_bdevs_operational": 2, 00:11:15.093 "base_bdevs_list": [ 00:11:15.093 { 00:11:15.093 "name": "pt1", 00:11:15.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.093 "is_configured": true, 00:11:15.093 "data_offset": 2048, 00:11:15.093 "data_size": 63488 00:11:15.093 }, 00:11:15.093 { 00:11:15.093 "name": "pt2", 00:11:15.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.093 "is_configured": true, 00:11:15.093 "data_offset": 2048, 00:11:15.093 "data_size": 63488 00:11:15.093 } 00:11:15.093 ] 00:11:15.093 } 00:11:15.093 } 00:11:15.093 }' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:15.093 pt2' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.093 07:08:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.093 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:15.353 [2024-11-20 07:08:12.489384] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9cda0238-5116-4369-82b6-eda51371a15a 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9cda0238-5116-4369-82b6-eda51371a15a ']' 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.353 [2024-11-20 07:08:12.533061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.353 [2024-11-20 07:08:12.533098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.353 [2024-11-20 07:08:12.533204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.353 [2024-11-20 07:08:12.533270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.353 [2024-11-20 07:08:12.533293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.353 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.354 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.614 [2024-11-20 07:08:12.673133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:15.614 [2024-11-20 07:08:12.675760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:15.614 [2024-11-20 07:08:12.675895] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:15.614 [2024-11-20 07:08:12.675968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:15.614 [2024-11-20 07:08:12.675993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.614 [2024-11-20 07:08:12.676010] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:15.614 request: 00:11:15.614 { 00:11:15.614 "name": "raid_bdev1", 00:11:15.614 "raid_level": "concat", 00:11:15.614 "base_bdevs": [ 00:11:15.614 "malloc1", 00:11:15.614 "malloc2" 00:11:15.614 ], 00:11:15.614 "strip_size_kb": 64, 00:11:15.614 "superblock": false, 00:11:15.614 "method": "bdev_raid_create", 00:11:15.614 "req_id": 1 00:11:15.614 } 00:11:15.614 Got JSON-RPC error response 00:11:15.614 response: 00:11:15.614 { 00:11:15.614 "code": -17, 00:11:15.614 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:15.614 } 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.614 [2024-11-20 07:08:12.733152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.614 [2024-11-20 07:08:12.733232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.614 [2024-11-20 07:08:12.733262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:15.614 [2024-11-20 07:08:12.733280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.614 [2024-11-20 07:08:12.736213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.614 [2024-11-20 07:08:12.736261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.614 [2024-11-20 07:08:12.736368] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:15.614 [2024-11-20 07:08:12.736448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:15.614 pt1 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.614 "name": "raid_bdev1", 00:11:15.614 "uuid": "9cda0238-5116-4369-82b6-eda51371a15a", 00:11:15.614 "strip_size_kb": 64, 00:11:15.614 "state": "configuring", 00:11:15.614 "raid_level": "concat", 00:11:15.614 "superblock": true, 00:11:15.614 "num_base_bdevs": 2, 00:11:15.614 "num_base_bdevs_discovered": 1, 00:11:15.614 "num_base_bdevs_operational": 2, 00:11:15.614 "base_bdevs_list": [ 00:11:15.614 { 00:11:15.614 "name": "pt1", 00:11:15.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.614 "is_configured": true, 00:11:15.614 "data_offset": 2048, 00:11:15.614 "data_size": 63488 00:11:15.614 }, 00:11:15.614 { 00:11:15.614 "name": null, 00:11:15.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.614 "is_configured": false, 00:11:15.614 "data_offset": 2048, 00:11:15.614 "data_size": 63488 00:11:15.614 } 00:11:15.614 ] 00:11:15.614 }' 00:11:15.614 07:08:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.614 07:08:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.181 [2024-11-20 07:08:13.237286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.181 [2024-11-20 07:08:13.237380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.181 [2024-11-20 07:08:13.237424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:16.181 [2024-11-20 07:08:13.237442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.181 [2024-11-20 07:08:13.238038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.181 [2024-11-20 07:08:13.238075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.181 [2024-11-20 07:08:13.238174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.181 [2024-11-20 07:08:13.238215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.181 [2024-11-20 07:08:13.238388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.181 [2024-11-20 07:08:13.238409] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:16.181 [2024-11-20 07:08:13.238710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:16.181 [2024-11-20 07:08:13.238939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.181 [2024-11-20 07:08:13.238956] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:16.181 [2024-11-20 07:08:13.239124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.181 pt2 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.181 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.182 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.182 "name": "raid_bdev1", 00:11:16.182 "uuid": "9cda0238-5116-4369-82b6-eda51371a15a", 00:11:16.182 "strip_size_kb": 64, 00:11:16.182 "state": "online", 00:11:16.182 "raid_level": "concat", 00:11:16.182 "superblock": true, 00:11:16.182 "num_base_bdevs": 2, 00:11:16.182 "num_base_bdevs_discovered": 2, 00:11:16.182 "num_base_bdevs_operational": 2, 00:11:16.182 "base_bdevs_list": [ 00:11:16.182 { 00:11:16.182 "name": "pt1", 00:11:16.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.182 "is_configured": true, 00:11:16.182 "data_offset": 2048, 00:11:16.182 "data_size": 63488 00:11:16.182 }, 00:11:16.182 { 00:11:16.182 "name": "pt2", 00:11:16.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.182 "is_configured": true, 00:11:16.182 "data_offset": 2048, 00:11:16.182 "data_size": 63488 00:11:16.182 } 00:11:16.182 ] 00:11:16.182 }' 00:11:16.182 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.182 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:16.751 
07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.751 [2024-11-20 07:08:13.765761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.751 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.751 "name": "raid_bdev1", 00:11:16.751 "aliases": [ 00:11:16.751 "9cda0238-5116-4369-82b6-eda51371a15a" 00:11:16.751 ], 00:11:16.751 "product_name": "Raid Volume", 00:11:16.751 "block_size": 512, 00:11:16.751 "num_blocks": 126976, 00:11:16.751 "uuid": "9cda0238-5116-4369-82b6-eda51371a15a", 00:11:16.751 "assigned_rate_limits": { 00:11:16.751 "rw_ios_per_sec": 0, 00:11:16.751 "rw_mbytes_per_sec": 0, 00:11:16.751 "r_mbytes_per_sec": 0, 00:11:16.751 "w_mbytes_per_sec": 0 00:11:16.751 }, 00:11:16.751 "claimed": false, 00:11:16.751 "zoned": false, 00:11:16.751 "supported_io_types": { 00:11:16.751 "read": true, 00:11:16.751 "write": true, 00:11:16.751 "unmap": true, 00:11:16.751 "flush": true, 00:11:16.751 "reset": true, 00:11:16.751 "nvme_admin": false, 00:11:16.751 "nvme_io": false, 00:11:16.751 "nvme_io_md": false, 00:11:16.751 
"write_zeroes": true, 00:11:16.751 "zcopy": false, 00:11:16.751 "get_zone_info": false, 00:11:16.751 "zone_management": false, 00:11:16.751 "zone_append": false, 00:11:16.751 "compare": false, 00:11:16.751 "compare_and_write": false, 00:11:16.751 "abort": false, 00:11:16.751 "seek_hole": false, 00:11:16.751 "seek_data": false, 00:11:16.751 "copy": false, 00:11:16.751 "nvme_iov_md": false 00:11:16.751 }, 00:11:16.751 "memory_domains": [ 00:11:16.751 { 00:11:16.751 "dma_device_id": "system", 00:11:16.751 "dma_device_type": 1 00:11:16.751 }, 00:11:16.751 { 00:11:16.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.751 "dma_device_type": 2 00:11:16.751 }, 00:11:16.751 { 00:11:16.751 "dma_device_id": "system", 00:11:16.751 "dma_device_type": 1 00:11:16.751 }, 00:11:16.751 { 00:11:16.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.751 "dma_device_type": 2 00:11:16.751 } 00:11:16.751 ], 00:11:16.751 "driver_specific": { 00:11:16.751 "raid": { 00:11:16.751 "uuid": "9cda0238-5116-4369-82b6-eda51371a15a", 00:11:16.751 "strip_size_kb": 64, 00:11:16.751 "state": "online", 00:11:16.751 "raid_level": "concat", 00:11:16.751 "superblock": true, 00:11:16.751 "num_base_bdevs": 2, 00:11:16.751 "num_base_bdevs_discovered": 2, 00:11:16.752 "num_base_bdevs_operational": 2, 00:11:16.752 "base_bdevs_list": [ 00:11:16.752 { 00:11:16.752 "name": "pt1", 00:11:16.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.752 "is_configured": true, 00:11:16.752 "data_offset": 2048, 00:11:16.752 "data_size": 63488 00:11:16.752 }, 00:11:16.752 { 00:11:16.752 "name": "pt2", 00:11:16.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.752 "is_configured": true, 00:11:16.752 "data_offset": 2048, 00:11:16.752 "data_size": 63488 00:11:16.752 } 00:11:16.752 ] 00:11:16.752 } 00:11:16.752 } 00:11:16.752 }' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:16.752 pt2' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.752 07:08:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.752 07:08:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.752 [2024-11-20 07:08:14.029739] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9cda0238-5116-4369-82b6-eda51371a15a '!=' 9cda0238-5116-4369-82b6-eda51371a15a ']' 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.752 07:08:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62097 00:11:17.010 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62097 ']' 00:11:17.010 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62097 00:11:17.010 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:17.010 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.010 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62097 00:11:17.011 07:08:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.011 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.011 killing process with pid 62097 00:11:17.011 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62097' 00:11:17.011 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62097 00:11:17.011 [2024-11-20 07:08:14.098309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:17.011 07:08:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62097 00:11:17.011 [2024-11-20 07:08:14.098417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.011 [2024-11-20 07:08:14.098499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.011 [2024-11-20 07:08:14.098519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:17.011 [2024-11-20 07:08:14.284218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:17.973 07:08:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:17.973 00:11:17.973 real 0m4.900s 00:11:17.973 user 0m7.284s 00:11:17.973 sys 0m0.699s 00:11:17.973 07:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.973 07:08:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.973 ************************************ 00:11:17.973 END TEST raid_superblock_test 00:11:17.973 ************************************ 00:11:18.234 07:08:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:11:18.234 07:08:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:18.234 07:08:15 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.234 07:08:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:18.234 ************************************ 00:11:18.234 START TEST raid_read_error_test 00:11:18.234 ************************************ 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:18.234 07:08:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.NAhP9nlK6c 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62303 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62303 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62303 ']' 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.234 07:08:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.234 [2024-11-20 07:08:15.459754] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:11:18.234 [2024-11-20 07:08:15.459977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62303 ] 00:11:18.494 [2024-11-20 07:08:15.647985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.494 [2024-11-20 07:08:15.779907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.753 [2024-11-20 07:08:15.983955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:18.753 [2024-11-20 07:08:15.984032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 BaseBdev1_malloc 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 true 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 [2024-11-20 07:08:16.477076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:19.321 [2024-11-20 07:08:16.477142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.321 [2024-11-20 07:08:16.477172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:19.321 [2024-11-20 07:08:16.477191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.321 [2024-11-20 07:08:16.479976] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.321 [2024-11-20 07:08:16.480026] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:19.321 BaseBdev1 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:19.321 BaseBdev2_malloc 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 true 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 [2024-11-20 07:08:16.532659] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:19.321 [2024-11-20 07:08:16.532723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.321 [2024-11-20 07:08:16.532749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:19.321 [2024-11-20 07:08:16.532768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.321 [2024-11-20 07:08:16.535487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.321 [2024-11-20 07:08:16.535531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:19.321 BaseBdev2 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:19.321 
07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.321 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.321 [2024-11-20 07:08:16.540737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.321 [2024-11-20 07:08:16.543131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.321 [2024-11-20 07:08:16.543395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:19.322 [2024-11-20 07:08:16.543418] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:19.322 [2024-11-20 07:08:16.543709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:19.322 [2024-11-20 07:08:16.543993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:19.322 [2024-11-20 07:08:16.544022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:19.322 [2024-11-20 07:08:16.544207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.322 "name": "raid_bdev1", 00:11:19.322 "uuid": "62374861-998e-41a6-9890-9940f65a9ccb", 00:11:19.322 "strip_size_kb": 64, 00:11:19.322 "state": "online", 00:11:19.322 "raid_level": "concat", 00:11:19.322 "superblock": true, 00:11:19.322 "num_base_bdevs": 2, 00:11:19.322 "num_base_bdevs_discovered": 2, 00:11:19.322 "num_base_bdevs_operational": 2, 00:11:19.322 "base_bdevs_list": [ 00:11:19.322 { 00:11:19.322 "name": "BaseBdev1", 00:11:19.322 "uuid": "ab2cdcf3-5460-517e-b470-120ce07ab486", 00:11:19.322 "is_configured": true, 00:11:19.322 "data_offset": 2048, 00:11:19.322 "data_size": 63488 00:11:19.322 }, 00:11:19.322 { 00:11:19.322 "name": "BaseBdev2", 00:11:19.322 "uuid": "830b37e9-0cd2-536c-81fa-1d1ceb854098", 00:11:19.322 "is_configured": true, 00:11:19.322 "data_offset": 2048, 00:11:19.322 "data_size": 63488 00:11:19.322 } 00:11:19.322 ] 00:11:19.322 }' 00:11:19.322 07:08:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.322 07:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.890 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:19.890 07:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:19.890 [2024-11-20 07:08:17.134314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.826 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.826 "name": "raid_bdev1", 00:11:20.826 "uuid": "62374861-998e-41a6-9890-9940f65a9ccb", 00:11:20.826 "strip_size_kb": 64, 00:11:20.827 "state": "online", 00:11:20.827 "raid_level": "concat", 00:11:20.827 "superblock": true, 00:11:20.827 "num_base_bdevs": 2, 00:11:20.827 "num_base_bdevs_discovered": 2, 00:11:20.827 "num_base_bdevs_operational": 2, 00:11:20.827 "base_bdevs_list": [ 00:11:20.827 { 00:11:20.827 "name": "BaseBdev1", 00:11:20.827 "uuid": "ab2cdcf3-5460-517e-b470-120ce07ab486", 00:11:20.827 "is_configured": true, 00:11:20.827 "data_offset": 2048, 00:11:20.827 "data_size": 63488 00:11:20.827 }, 00:11:20.827 { 00:11:20.827 "name": "BaseBdev2", 00:11:20.827 "uuid": "830b37e9-0cd2-536c-81fa-1d1ceb854098", 00:11:20.827 "is_configured": true, 00:11:20.827 "data_offset": 2048, 00:11:20.827 "data_size": 63488 00:11:20.827 } 00:11:20.827 ] 00:11:20.827 }' 00:11:20.827 07:08:18 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.827 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.394 [2024-11-20 07:08:18.536755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.394 [2024-11-20 07:08:18.536968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.394 [2024-11-20 07:08:18.540447] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.394 [2024-11-20 07:08:18.540503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.394 [2024-11-20 07:08:18.540547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.394 [2024-11-20 07:08:18.540568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:21.394 { 00:11:21.394 "results": [ 00:11:21.394 { 00:11:21.394 "job": "raid_bdev1", 00:11:21.394 "core_mask": "0x1", 00:11:21.394 "workload": "randrw", 00:11:21.394 "percentage": 50, 00:11:21.394 "status": "finished", 00:11:21.394 "queue_depth": 1, 00:11:21.394 "io_size": 131072, 00:11:21.394 "runtime": 1.40036, 00:11:21.394 "iops": 10834.356879659515, 00:11:21.394 "mibps": 1354.2946099574394, 00:11:21.394 "io_failed": 1, 00:11:21.394 "io_timeout": 0, 00:11:21.394 "avg_latency_us": 128.94417667747135, 00:11:21.394 "min_latency_us": 40.261818181818185, 00:11:21.394 "max_latency_us": 1891.6072727272726 00:11:21.394 } 00:11:21.394 ], 00:11:21.394 "core_count": 1 00:11:21.394 } 00:11:21.394 07:08:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62303 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62303 ']' 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62303 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62303 00:11:21.394 killing process with pid 62303 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62303' 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62303 00:11:21.394 [2024-11-20 07:08:18.578727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.394 07:08:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62303 00:11:21.394 [2024-11-20 07:08:18.700659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.NAhP9nlK6c 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:22.770 07:08:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:22.770 00:11:22.770 real 0m4.460s 00:11:22.770 user 0m5.558s 00:11:22.770 sys 0m0.550s 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.770 07:08:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.770 ************************************ 00:11:22.770 END TEST raid_read_error_test 00:11:22.770 ************************************ 00:11:22.770 07:08:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:11:22.770 07:08:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:22.770 07:08:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.770 07:08:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.770 ************************************ 00:11:22.770 START TEST raid_write_error_test 00:11:22.770 ************************************ 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PSZm72qi34 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62453 
00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62453 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62453 ']' 00:11:22.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.770 07:08:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.770 [2024-11-20 07:08:19.959806] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:11:22.770 [2024-11-20 07:08:19.960018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62453 ] 00:11:23.029 [2024-11-20 07:08:20.138552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.029 [2024-11-20 07:08:20.269070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.289 [2024-11-20 07:08:20.473210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.289 [2024-11-20 07:08:20.473360] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 BaseBdev1_malloc 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 true 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 [2024-11-20 07:08:20.946300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:23.857 [2024-11-20 07:08:20.946384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.857 [2024-11-20 07:08:20.946413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:23.857 [2024-11-20 07:08:20.946430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.857 [2024-11-20 07:08:20.949295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.857 [2024-11-20 07:08:20.949361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:23.857 BaseBdev1 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 BaseBdev2_malloc 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:23.857 07:08:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 true 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 [2024-11-20 07:08:21.002522] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:23.857 [2024-11-20 07:08:21.002597] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.857 [2024-11-20 07:08:21.002624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:23.857 [2024-11-20 07:08:21.002646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.857 [2024-11-20 07:08:21.005549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.857 [2024-11-20 07:08:21.005598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:23.857 BaseBdev2 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.857 [2024-11-20 07:08:21.010592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:23.857 [2024-11-20 07:08:21.013231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.857 [2024-11-20 07:08:21.013487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:23.857 [2024-11-20 07:08:21.013510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:11:23.857 [2024-11-20 07:08:21.013793] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:23.857 [2024-11-20 07:08:21.014197] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:23.857 [2024-11-20 07:08:21.014326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:23.857 [2024-11-20 07:08:21.014743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.857 07:08:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.857 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.858 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.858 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.858 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.858 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.858 "name": "raid_bdev1", 00:11:23.858 "uuid": "b8093fc4-a188-41e3-95a3-a00f3d4f05ce", 00:11:23.858 "strip_size_kb": 64, 00:11:23.858 "state": "online", 00:11:23.858 "raid_level": "concat", 00:11:23.858 "superblock": true, 00:11:23.858 "num_base_bdevs": 2, 00:11:23.858 "num_base_bdevs_discovered": 2, 00:11:23.858 "num_base_bdevs_operational": 2, 00:11:23.858 "base_bdevs_list": [ 00:11:23.858 { 00:11:23.858 "name": "BaseBdev1", 00:11:23.858 "uuid": "0c6bd382-2637-565c-a5ac-df69f8c116f2", 00:11:23.858 "is_configured": true, 00:11:23.858 "data_offset": 2048, 00:11:23.858 "data_size": 63488 00:11:23.858 }, 00:11:23.858 { 00:11:23.858 "name": "BaseBdev2", 00:11:23.858 "uuid": "8c4601a7-eb77-5bc2-b20e-565818f86df6", 00:11:23.858 "is_configured": true, 00:11:23.858 "data_offset": 2048, 00:11:23.858 "data_size": 63488 00:11:23.858 } 00:11:23.858 ] 00:11:23.858 }' 00:11:23.858 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.858 07:08:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.431 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:11:24.431 07:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.431 [2024-11-20 07:08:21.656383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.368 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.368 "name": "raid_bdev1", 00:11:25.369 "uuid": "b8093fc4-a188-41e3-95a3-a00f3d4f05ce", 00:11:25.369 "strip_size_kb": 64, 00:11:25.369 "state": "online", 00:11:25.369 "raid_level": "concat", 00:11:25.369 "superblock": true, 00:11:25.369 "num_base_bdevs": 2, 00:11:25.369 "num_base_bdevs_discovered": 2, 00:11:25.369 "num_base_bdevs_operational": 2, 00:11:25.369 "base_bdevs_list": [ 00:11:25.369 { 00:11:25.369 "name": "BaseBdev1", 00:11:25.369 "uuid": "0c6bd382-2637-565c-a5ac-df69f8c116f2", 00:11:25.369 "is_configured": true, 00:11:25.369 "data_offset": 2048, 00:11:25.369 "data_size": 63488 00:11:25.369 }, 00:11:25.369 { 00:11:25.369 "name": "BaseBdev2", 00:11:25.369 "uuid": "8c4601a7-eb77-5bc2-b20e-565818f86df6", 00:11:25.369 "is_configured": true, 00:11:25.369 "data_offset": 2048, 00:11:25.369 "data_size": 63488 00:11:25.369 } 00:11:25.369 ] 00:11:25.369 }' 00:11:25.369 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.369 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.934 [2024-11-20 07:08:22.990211] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.934 [2024-11-20 07:08:22.990465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.934 [2024-11-20 07:08:22.993987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.934 [2024-11-20 07:08:22.994174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.934 [2024-11-20 07:08:22.994263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.934 [2024-11-20 07:08:22.994530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.934 { 00:11:25.934 "results": [ 00:11:25.934 { 00:11:25.934 "job": "raid_bdev1", 00:11:25.934 "core_mask": "0x1", 00:11:25.934 "workload": "randrw", 00:11:25.934 "percentage": 50, 00:11:25.934 "status": "finished", 00:11:25.934 "queue_depth": 1, 00:11:25.934 "io_size": 131072, 00:11:25.934 "runtime": 1.331771, 00:11:25.934 "iops": 10940.319319162229, 00:11:25.934 "mibps": 1367.5399148952786, 00:11:25.934 "io_failed": 1, 00:11:25.934 "io_timeout": 0, 00:11:25.934 "avg_latency_us": 127.52200747437311, 00:11:25.934 "min_latency_us": 40.49454545454545, 00:11:25.934 "max_latency_us": 1869.2654545454545 00:11:25.934 } 00:11:25.934 ], 00:11:25.934 "core_count": 1 00:11:25.934 } 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62453 00:11:25.934 07:08:22 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62453 ']' 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62453 00:11:25.934 07:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62453 00:11:25.934 killing process with pid 62453 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62453' 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62453 00:11:25.934 [2024-11-20 07:08:23.029051] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.934 07:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62453 00:11:25.934 [2024-11-20 07:08:23.150409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PSZm72qi34 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:27.310 07:08:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:27.310 00:11:27.310 real 0m4.392s 00:11:27.310 user 0m5.472s 00:11:27.310 sys 0m0.533s 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.310 ************************************ 00:11:27.310 END TEST raid_write_error_test 00:11:27.310 ************************************ 00:11:27.310 07:08:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.310 07:08:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:27.310 07:08:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:11:27.310 07:08:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:27.310 07:08:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.310 07:08:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:27.310 ************************************ 00:11:27.310 START TEST raid_state_function_test 00:11:27.310 ************************************ 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
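The pass/fail figure for `raid_write_error_test` above comes from the bdevperf results JSON: the shell pipeline at `bdev_raid.sh@845` (`grep raid_bdev1 | grep -v Job | awk '{print $6}'`) pulls the failures-per-second column and compares it against `0.00`. A minimal sketch of that arithmetic, with the field names and values copied from the results block in this log (the rounding step is an assumption about how bdevperf formats the column):

```python
import json

# Results block as emitted by bdevperf for the raid_write_error_test job,
# values copied from the log above.
results = json.loads("""
{
  "results": [
    {
      "job": "raid_bdev1",
      "io_size": 131072,
      "runtime": 1.331771,
      "iops": 10940.319319162229,
      "io_failed": 1
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]

# Throughput in MiB/s: iops * io_size / 1 MiB. With 128 KiB I/Os this is iops / 8,
# matching the "mibps" value printed in the log.
mibps = job["iops"] * job["io_size"] / (1024 * 1024)

# Failures per second: io_failed / runtime. This is the number the awk
# pipeline extracts as fail_per_s and tests against 0.00.
fail_per_s = round(job["io_failed"] / job["runtime"], 2)

print(round(mibps, 4))   # ~1367.5399
print(fail_per_s)        # 0.75
```

With one failed I/O over a 1.33 s run, `fail_per_s` is 0.75, which is why the `[[ 0.75 != \0\.\0\0 ]]` check in the log succeeds: concat has no redundancy, so a write error is expected to surface as a failed I/O.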
# (( i <= num_base_bdevs )) 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:27.310 Process raid pid: 62592 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62592 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 
'Process raid pid: 62592' 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62592 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62592 ']' 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:27.310 07:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.310 [2024-11-20 07:08:24.402224] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:11:27.310 [2024-11-20 07:08:24.403192] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.310 [2024-11-20 07:08:24.597210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.569 [2024-11-20 07:08:24.724389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.827 [2024-11-20 07:08:24.926341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.828 [2024-11-20 07:08:24.926403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.394 [2024-11-20 07:08:25.418288] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.394 [2024-11-20 07:08:25.418365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.394 [2024-11-20 07:08:25.418381] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.394 [2024-11-20 07:08:25.418397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.394 07:08:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.394 "name": "Existed_Raid", 00:11:28.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.394 "strip_size_kb": 0, 00:11:28.394 "state": "configuring", 00:11:28.394 
"raid_level": "raid1", 00:11:28.394 "superblock": false, 00:11:28.394 "num_base_bdevs": 2, 00:11:28.394 "num_base_bdevs_discovered": 0, 00:11:28.394 "num_base_bdevs_operational": 2, 00:11:28.394 "base_bdevs_list": [ 00:11:28.394 { 00:11:28.394 "name": "BaseBdev1", 00:11:28.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.394 "is_configured": false, 00:11:28.394 "data_offset": 0, 00:11:28.394 "data_size": 0 00:11:28.394 }, 00:11:28.394 { 00:11:28.394 "name": "BaseBdev2", 00:11:28.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.394 "is_configured": false, 00:11:28.394 "data_offset": 0, 00:11:28.394 "data_size": 0 00:11:28.394 } 00:11:28.394 ] 00:11:28.394 }' 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.394 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.653 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:28.653 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.653 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.653 [2024-11-20 07:08:25.890361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:28.654 [2024-11-20 07:08:25.890400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:28.654 [2024-11-20 07:08:25.898323] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.654 [2024-11-20 07:08:25.898540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.654 [2024-11-20 07:08:25.898705] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.654 [2024-11-20 07:08:25.898856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.654 [2024-11-20 07:08:25.943748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.654 BaseBdev1 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.654 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.654 [ 00:11:28.654 { 00:11:28.654 "name": "BaseBdev1", 00:11:28.654 "aliases": [ 00:11:28.654 "1ac32e26-b25e-44a3-adbf-49a792b76b4e" 00:11:28.654 ], 00:11:28.654 "product_name": "Malloc disk", 00:11:28.654 "block_size": 512, 00:11:28.654 "num_blocks": 65536, 00:11:28.654 "uuid": "1ac32e26-b25e-44a3-adbf-49a792b76b4e", 00:11:28.654 "assigned_rate_limits": { 00:11:28.654 "rw_ios_per_sec": 0, 00:11:28.654 "rw_mbytes_per_sec": 0, 00:11:28.654 "r_mbytes_per_sec": 0, 00:11:28.654 "w_mbytes_per_sec": 0 00:11:28.654 }, 00:11:28.654 "claimed": true, 00:11:28.654 "claim_type": "exclusive_write", 00:11:28.654 "zoned": false, 00:11:28.654 "supported_io_types": { 00:11:28.654 "read": true, 00:11:28.654 "write": true, 00:11:28.654 "unmap": true, 00:11:28.654 "flush": true, 00:11:28.654 "reset": true, 00:11:28.912 "nvme_admin": false, 00:11:28.912 "nvme_io": false, 00:11:28.912 "nvme_io_md": false, 00:11:28.912 "write_zeroes": true, 00:11:28.912 "zcopy": true, 00:11:28.912 "get_zone_info": false, 00:11:28.912 "zone_management": false, 00:11:28.912 "zone_append": false, 00:11:28.912 "compare": false, 00:11:28.912 "compare_and_write": false, 00:11:28.912 "abort": true, 00:11:28.912 "seek_hole": false, 00:11:28.912 "seek_data": false, 00:11:28.912 "copy": true, 00:11:28.912 "nvme_iov_md": 
false 00:11:28.912 }, 00:11:28.912 "memory_domains": [ 00:11:28.912 { 00:11:28.912 "dma_device_id": "system", 00:11:28.912 "dma_device_type": 1 00:11:28.912 }, 00:11:28.912 { 00:11:28.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.912 "dma_device_type": 2 00:11:28.912 } 00:11:28.912 ], 00:11:28.912 "driver_specific": {} 00:11:28.912 } 00:11:28.912 ] 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.912 07:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.912 
07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.913 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.913 07:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.913 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.913 "name": "Existed_Raid", 00:11:28.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.913 "strip_size_kb": 0, 00:11:28.913 "state": "configuring", 00:11:28.913 "raid_level": "raid1", 00:11:28.913 "superblock": false, 00:11:28.913 "num_base_bdevs": 2, 00:11:28.913 "num_base_bdevs_discovered": 1, 00:11:28.913 "num_base_bdevs_operational": 2, 00:11:28.913 "base_bdevs_list": [ 00:11:28.913 { 00:11:28.913 "name": "BaseBdev1", 00:11:28.913 "uuid": "1ac32e26-b25e-44a3-adbf-49a792b76b4e", 00:11:28.913 "is_configured": true, 00:11:28.913 "data_offset": 0, 00:11:28.913 "data_size": 65536 00:11:28.913 }, 00:11:28.913 { 00:11:28.913 "name": "BaseBdev2", 00:11:28.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.913 "is_configured": false, 00:11:28.913 "data_offset": 0, 00:11:28.913 "data_size": 0 00:11:28.913 } 00:11:28.913 ] 00:11:28.913 }' 00:11:28.913 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.913 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.478 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:29.478 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.478 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.478 [2024-11-20 07:08:26.503968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.479 [2024-11-20 07:08:26.504165] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.479 [2024-11-20 07:08:26.511997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.479 [2024-11-20 07:08:26.514419] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.479 [2024-11-20 07:08:26.514481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.479 "name": "Existed_Raid", 00:11:29.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.479 "strip_size_kb": 0, 00:11:29.479 "state": "configuring", 00:11:29.479 "raid_level": "raid1", 00:11:29.479 "superblock": false, 00:11:29.479 "num_base_bdevs": 2, 00:11:29.479 "num_base_bdevs_discovered": 1, 00:11:29.479 "num_base_bdevs_operational": 2, 00:11:29.479 "base_bdevs_list": [ 00:11:29.479 { 00:11:29.479 "name": "BaseBdev1", 00:11:29.479 "uuid": "1ac32e26-b25e-44a3-adbf-49a792b76b4e", 00:11:29.479 "is_configured": true, 00:11:29.479 "data_offset": 0, 00:11:29.479 "data_size": 65536 00:11:29.479 }, 00:11:29.479 { 00:11:29.479 "name": "BaseBdev2", 00:11:29.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.479 "is_configured": false, 00:11:29.479 "data_offset": 0, 00:11:29.479 "data_size": 0 00:11:29.479 } 00:11:29.479 ] 
00:11:29.479 }' 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.479 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.737 07:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.737 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.737 07:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.737 [2024-11-20 07:08:27.021931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.737 [2024-11-20 07:08:27.022020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.737 [2024-11-20 07:08:27.022033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:29.737 [2024-11-20 07:08:27.022378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:29.737 [2024-11-20 07:08:27.022575] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.737 [2024-11-20 07:08:27.022598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:29.737 [2024-11-20 07:08:27.022942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.737 BaseBdev2 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.737 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.737 [ 00:11:29.737 { 00:11:29.737 "name": "BaseBdev2", 00:11:29.737 "aliases": [ 00:11:29.737 "d6b8868d-d2bd-48c5-b4c4-78da60c6a29d" 00:11:29.737 ], 00:11:29.737 "product_name": "Malloc disk", 00:11:29.737 "block_size": 512, 00:11:29.737 "num_blocks": 65536, 00:11:29.737 "uuid": "d6b8868d-d2bd-48c5-b4c4-78da60c6a29d", 00:11:29.737 "assigned_rate_limits": { 00:11:29.737 "rw_ios_per_sec": 0, 00:11:29.737 "rw_mbytes_per_sec": 0, 00:11:29.737 "r_mbytes_per_sec": 0, 00:11:29.737 "w_mbytes_per_sec": 0 00:11:29.737 }, 00:11:29.737 "claimed": true, 00:11:29.737 "claim_type": "exclusive_write", 00:11:29.737 "zoned": false, 00:11:29.737 "supported_io_types": { 00:11:29.737 "read": true, 00:11:29.737 "write": true, 00:11:29.737 "unmap": true, 00:11:29.737 "flush": true, 00:11:29.737 "reset": true, 00:11:29.737 "nvme_admin": false, 00:11:29.737 "nvme_io": false, 00:11:29.737 "nvme_io_md": false, 00:11:29.737 "write_zeroes": 
true, 00:11:29.737 "zcopy": true, 00:11:29.737 "get_zone_info": false, 00:11:29.737 "zone_management": false, 00:11:29.737 "zone_append": false, 00:11:29.737 "compare": false, 00:11:29.737 "compare_and_write": false, 00:11:29.737 "abort": true, 00:11:29.737 "seek_hole": false, 00:11:29.737 "seek_data": false, 00:11:29.737 "copy": true, 00:11:29.737 "nvme_iov_md": false 00:11:29.737 }, 00:11:29.737 "memory_domains": [ 00:11:29.737 { 00:11:29.737 "dma_device_id": "system", 00:11:29.737 "dma_device_type": 1 00:11:29.737 }, 00:11:29.737 { 00:11:29.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.737 "dma_device_type": 2 00:11:29.737 } 00:11:29.737 ], 00:11:29.737 "driver_specific": {} 00:11:29.995 } 00:11:29.995 ] 00:11:29.995 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.996 07:08:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.996 "name": "Existed_Raid", 00:11:29.996 "uuid": "31f67c69-c7a8-4c2e-94c2-dc320f45838a", 00:11:29.996 "strip_size_kb": 0, 00:11:29.996 "state": "online", 00:11:29.996 "raid_level": "raid1", 00:11:29.996 "superblock": false, 00:11:29.996 "num_base_bdevs": 2, 00:11:29.996 "num_base_bdevs_discovered": 2, 00:11:29.996 "num_base_bdevs_operational": 2, 00:11:29.996 "base_bdevs_list": [ 00:11:29.996 { 00:11:29.996 "name": "BaseBdev1", 00:11:29.996 "uuid": "1ac32e26-b25e-44a3-adbf-49a792b76b4e", 00:11:29.996 "is_configured": true, 00:11:29.996 "data_offset": 0, 00:11:29.996 "data_size": 65536 00:11:29.996 }, 00:11:29.996 { 00:11:29.996 "name": "BaseBdev2", 00:11:29.996 "uuid": "d6b8868d-d2bd-48c5-b4c4-78da60c6a29d", 00:11:29.996 "is_configured": true, 00:11:29.996 "data_offset": 0, 00:11:29.996 "data_size": 65536 00:11:29.996 } 00:11:29.996 ] 00:11:29.996 }' 00:11:29.996 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.996 07:08:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.562 [2024-11-20 07:08:27.598548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.562 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.562 "name": "Existed_Raid", 00:11:30.562 "aliases": [ 00:11:30.562 "31f67c69-c7a8-4c2e-94c2-dc320f45838a" 00:11:30.562 ], 00:11:30.562 "product_name": "Raid Volume", 00:11:30.562 "block_size": 512, 00:11:30.562 "num_blocks": 65536, 00:11:30.562 "uuid": "31f67c69-c7a8-4c2e-94c2-dc320f45838a", 00:11:30.562 "assigned_rate_limits": { 00:11:30.562 "rw_ios_per_sec": 0, 00:11:30.562 "rw_mbytes_per_sec": 0, 00:11:30.562 "r_mbytes_per_sec": 0, 00:11:30.562 
"w_mbytes_per_sec": 0 00:11:30.562 }, 00:11:30.562 "claimed": false, 00:11:30.562 "zoned": false, 00:11:30.562 "supported_io_types": { 00:11:30.562 "read": true, 00:11:30.562 "write": true, 00:11:30.562 "unmap": false, 00:11:30.562 "flush": false, 00:11:30.562 "reset": true, 00:11:30.562 "nvme_admin": false, 00:11:30.562 "nvme_io": false, 00:11:30.562 "nvme_io_md": false, 00:11:30.562 "write_zeroes": true, 00:11:30.562 "zcopy": false, 00:11:30.562 "get_zone_info": false, 00:11:30.562 "zone_management": false, 00:11:30.562 "zone_append": false, 00:11:30.562 "compare": false, 00:11:30.562 "compare_and_write": false, 00:11:30.562 "abort": false, 00:11:30.562 "seek_hole": false, 00:11:30.562 "seek_data": false, 00:11:30.562 "copy": false, 00:11:30.562 "nvme_iov_md": false 00:11:30.562 }, 00:11:30.562 "memory_domains": [ 00:11:30.562 { 00:11:30.562 "dma_device_id": "system", 00:11:30.562 "dma_device_type": 1 00:11:30.562 }, 00:11:30.562 { 00:11:30.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.562 "dma_device_type": 2 00:11:30.562 }, 00:11:30.562 { 00:11:30.562 "dma_device_id": "system", 00:11:30.562 "dma_device_type": 1 00:11:30.562 }, 00:11:30.562 { 00:11:30.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.562 "dma_device_type": 2 00:11:30.562 } 00:11:30.562 ], 00:11:30.562 "driver_specific": { 00:11:30.562 "raid": { 00:11:30.562 "uuid": "31f67c69-c7a8-4c2e-94c2-dc320f45838a", 00:11:30.562 "strip_size_kb": 0, 00:11:30.562 "state": "online", 00:11:30.562 "raid_level": "raid1", 00:11:30.562 "superblock": false, 00:11:30.562 "num_base_bdevs": 2, 00:11:30.562 "num_base_bdevs_discovered": 2, 00:11:30.562 "num_base_bdevs_operational": 2, 00:11:30.562 "base_bdevs_list": [ 00:11:30.562 { 00:11:30.562 "name": "BaseBdev1", 00:11:30.562 "uuid": "1ac32e26-b25e-44a3-adbf-49a792b76b4e", 00:11:30.562 "is_configured": true, 00:11:30.562 "data_offset": 0, 00:11:30.562 "data_size": 65536 00:11:30.562 }, 00:11:30.563 { 00:11:30.563 "name": "BaseBdev2", 00:11:30.563 "uuid": 
"d6b8868d-d2bd-48c5-b4c4-78da60c6a29d", 00:11:30.563 "is_configured": true, 00:11:30.563 "data_offset": 0, 00:11:30.563 "data_size": 65536 00:11:30.563 } 00:11:30.563 ] 00:11:30.563 } 00:11:30.563 } 00:11:30.563 }' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:30.563 BaseBdev2' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.563 07:08:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.563 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.821 [2024-11-20 07:08:27.898429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.821 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.822 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.822 07:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.822 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.822 07:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.822 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.822 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.822 "name": "Existed_Raid", 00:11:30.822 "uuid": "31f67c69-c7a8-4c2e-94c2-dc320f45838a", 00:11:30.822 "strip_size_kb": 0, 00:11:30.822 "state": "online", 00:11:30.822 "raid_level": "raid1", 00:11:30.822 "superblock": false, 00:11:30.822 "num_base_bdevs": 2, 00:11:30.822 "num_base_bdevs_discovered": 1, 00:11:30.822 "num_base_bdevs_operational": 1, 00:11:30.822 "base_bdevs_list": [ 00:11:30.822 { 
00:11:30.822 "name": null, 00:11:30.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.822 "is_configured": false, 00:11:30.822 "data_offset": 0, 00:11:30.822 "data_size": 65536 00:11:30.822 }, 00:11:30.822 { 00:11:30.822 "name": "BaseBdev2", 00:11:30.822 "uuid": "d6b8868d-d2bd-48c5-b4c4-78da60c6a29d", 00:11:30.822 "is_configured": true, 00:11:30.822 "data_offset": 0, 00:11:30.822 "data_size": 65536 00:11:30.822 } 00:11:30.822 ] 00:11:30.822 }' 00:11:30.822 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.822 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:31.388 [2024-11-20 07:08:28.585500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:31.388 [2024-11-20 07:08:28.585753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:31.388 [2024-11-20 07:08:28.674191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:31.388 [2024-11-20 07:08:28.674450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:31.388 [2024-11-20 07:08:28.674659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.388 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62592 00:11:31.647 07:08:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62592 ']' 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62592 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62592 00:11:31.647 killing process with pid 62592 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62592' 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62592 00:11:31.647 [2024-11-20 07:08:28.760818] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.647 07:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62592 00:11:31.647 [2024-11-20 07:08:28.775943] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.602 00:11:32.602 real 0m5.534s 00:11:32.602 user 0m8.391s 00:11:32.602 sys 0m0.760s 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.602 ************************************ 00:11:32.602 END TEST raid_state_function_test 00:11:32.602 ************************************ 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.602 07:08:29 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:11:32.602 07:08:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.602 07:08:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.602 07:08:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.602 ************************************ 00:11:32.602 START TEST raid_state_function_test_sb 00:11:32.602 ************************************ 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:32.602 Process raid pid: 62851 00:11:32.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:32.602 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62851 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62851' 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62851 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62851 ']' 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.603 07:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 [2024-11-20 07:08:29.988955] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:11:32.860 [2024-11-20 07:08:29.989456] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:32.860 [2024-11-20 07:08:30.176659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.118 [2024-11-20 07:08:30.306227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.376 [2024-11-20 07:08:30.504081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.376 [2024-11-20 07:08:30.504267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.943 [2024-11-20 07:08:30.994285] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:33.943 [2024-11-20 07:08:30.994557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:33.943 [2024-11-20 07:08:30.994587] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:33.943 [2024-11-20 07:08:30.994606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.943 07:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.943 "name": "Existed_Raid", 00:11:33.943 "uuid": "cd558e1d-05c6-4620-a3f3-821636abe485", 00:11:33.943 "strip_size_kb": 0, 00:11:33.943 "state": "configuring", 00:11:33.943 "raid_level": "raid1", 00:11:33.943 "superblock": true, 00:11:33.943 "num_base_bdevs": 2, 00:11:33.943 "num_base_bdevs_discovered": 0, 00:11:33.943 "num_base_bdevs_operational": 2, 00:11:33.943 "base_bdevs_list": [ 00:11:33.943 { 00:11:33.943 "name": "BaseBdev1", 00:11:33.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.943 "is_configured": false, 00:11:33.943 "data_offset": 0, 00:11:33.943 "data_size": 0 00:11:33.943 }, 00:11:33.943 { 00:11:33.943 "name": "BaseBdev2", 00:11:33.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.943 "is_configured": false, 00:11:33.943 "data_offset": 0, 00:11:33.943 "data_size": 0 00:11:33.943 } 00:11:33.943 ] 00:11:33.943 }' 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.943 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.257 [2024-11-20 07:08:31.462339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:11:34.257 [2024-11-20 07:08:31.462378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.257 [2024-11-20 07:08:31.470333] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.257 [2024-11-20 07:08:31.470402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.257 [2024-11-20 07:08:31.470416] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.257 [2024-11-20 07:08:31.470435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.257 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.257 [2024-11-20 07:08:31.516549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.257 BaseBdev1 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 [ 00:11:34.258 { 00:11:34.258 "name": "BaseBdev1", 00:11:34.258 "aliases": [ 00:11:34.258 "a732b6bc-d6e7-47a1-88db-cbd72a65cdfb" 00:11:34.258 ], 00:11:34.258 "product_name": "Malloc disk", 00:11:34.258 "block_size": 512, 00:11:34.258 "num_blocks": 65536, 00:11:34.258 "uuid": "a732b6bc-d6e7-47a1-88db-cbd72a65cdfb", 00:11:34.258 "assigned_rate_limits": { 00:11:34.258 "rw_ios_per_sec": 0, 00:11:34.258 "rw_mbytes_per_sec": 0, 00:11:34.258 "r_mbytes_per_sec": 0, 00:11:34.258 "w_mbytes_per_sec": 0 00:11:34.258 }, 00:11:34.258 "claimed": true, 
00:11:34.258 "claim_type": "exclusive_write", 00:11:34.258 "zoned": false, 00:11:34.258 "supported_io_types": { 00:11:34.258 "read": true, 00:11:34.258 "write": true, 00:11:34.258 "unmap": true, 00:11:34.258 "flush": true, 00:11:34.258 "reset": true, 00:11:34.258 "nvme_admin": false, 00:11:34.258 "nvme_io": false, 00:11:34.258 "nvme_io_md": false, 00:11:34.258 "write_zeroes": true, 00:11:34.258 "zcopy": true, 00:11:34.258 "get_zone_info": false, 00:11:34.258 "zone_management": false, 00:11:34.258 "zone_append": false, 00:11:34.258 "compare": false, 00:11:34.258 "compare_and_write": false, 00:11:34.258 "abort": true, 00:11:34.258 "seek_hole": false, 00:11:34.258 "seek_data": false, 00:11:34.258 "copy": true, 00:11:34.258 "nvme_iov_md": false 00:11:34.258 }, 00:11:34.258 "memory_domains": [ 00:11:34.258 { 00:11:34.258 "dma_device_id": "system", 00:11:34.258 "dma_device_type": 1 00:11:34.258 }, 00:11:34.258 { 00:11:34.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.258 "dma_device_type": 2 00:11:34.258 } 00:11:34.258 ], 00:11:34.258 "driver_specific": {} 00:11:34.258 } 00:11:34.258 ] 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.258 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.516 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.516 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.516 "name": "Existed_Raid", 00:11:34.516 "uuid": "c5d78d1b-7d2d-4602-809a-2ad792f080ae", 00:11:34.516 "strip_size_kb": 0, 00:11:34.516 "state": "configuring", 00:11:34.516 "raid_level": "raid1", 00:11:34.516 "superblock": true, 00:11:34.516 "num_base_bdevs": 2, 00:11:34.516 "num_base_bdevs_discovered": 1, 00:11:34.516 "num_base_bdevs_operational": 2, 00:11:34.516 "base_bdevs_list": [ 00:11:34.516 { 00:11:34.516 "name": "BaseBdev1", 00:11:34.516 "uuid": "a732b6bc-d6e7-47a1-88db-cbd72a65cdfb", 00:11:34.516 "is_configured": true, 00:11:34.516 "data_offset": 2048, 00:11:34.516 "data_size": 63488 00:11:34.516 }, 00:11:34.516 { 00:11:34.516 "name": "BaseBdev2", 00:11:34.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.516 "is_configured": false, 00:11:34.516 
"data_offset": 0, 00:11:34.516 "data_size": 0 00:11:34.516 } 00:11:34.516 ] 00:11:34.516 }' 00:11:34.516 07:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.516 07:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.775 [2024-11-20 07:08:32.064725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.775 [2024-11-20 07:08:32.064799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.775 [2024-11-20 07:08:32.072751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:34.775 [2024-11-20 07:08:32.075330] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.775 [2024-11-20 07:08:32.075400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.775 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.776 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.034 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.034 07:08:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.034 "name": "Existed_Raid", 00:11:35.034 "uuid": "eef6f05b-b370-45cd-8334-b9fa7fe20606", 00:11:35.034 "strip_size_kb": 0, 00:11:35.034 "state": "configuring", 00:11:35.034 "raid_level": "raid1", 00:11:35.034 "superblock": true, 00:11:35.034 "num_base_bdevs": 2, 00:11:35.034 "num_base_bdevs_discovered": 1, 00:11:35.034 "num_base_bdevs_operational": 2, 00:11:35.034 "base_bdevs_list": [ 00:11:35.034 { 00:11:35.034 "name": "BaseBdev1", 00:11:35.034 "uuid": "a732b6bc-d6e7-47a1-88db-cbd72a65cdfb", 00:11:35.034 "is_configured": true, 00:11:35.034 "data_offset": 2048, 00:11:35.034 "data_size": 63488 00:11:35.034 }, 00:11:35.034 { 00:11:35.034 "name": "BaseBdev2", 00:11:35.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.034 "is_configured": false, 00:11:35.034 "data_offset": 0, 00:11:35.034 "data_size": 0 00:11:35.034 } 00:11:35.034 ] 00:11:35.034 }' 00:11:35.034 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.034 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.292 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.292 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.292 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.552 [2024-11-20 07:08:32.647781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.552 [2024-11-20 07:08:32.648280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.552 BaseBdev2 00:11:35.552 [2024-11-20 07:08:32.648430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.552 [2024-11-20 07:08:32.648774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:11:35.552 [2024-11-20 07:08:32.648990] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.552 [2024-11-20 07:08:32.649013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:35.552 [2024-11-20 07:08:32.649186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.552 [ 00:11:35.552 { 00:11:35.552 "name": "BaseBdev2", 00:11:35.552 "aliases": [ 00:11:35.552 "535f2d20-e3d6-4594-846e-ce2895686868" 00:11:35.552 ], 00:11:35.552 "product_name": "Malloc disk", 00:11:35.552 "block_size": 512, 00:11:35.552 "num_blocks": 65536, 00:11:35.552 "uuid": "535f2d20-e3d6-4594-846e-ce2895686868", 00:11:35.552 "assigned_rate_limits": { 00:11:35.552 "rw_ios_per_sec": 0, 00:11:35.552 "rw_mbytes_per_sec": 0, 00:11:35.552 "r_mbytes_per_sec": 0, 00:11:35.552 "w_mbytes_per_sec": 0 00:11:35.552 }, 00:11:35.552 "claimed": true, 00:11:35.552 "claim_type": "exclusive_write", 00:11:35.552 "zoned": false, 00:11:35.552 "supported_io_types": { 00:11:35.552 "read": true, 00:11:35.552 "write": true, 00:11:35.552 "unmap": true, 00:11:35.552 "flush": true, 00:11:35.552 "reset": true, 00:11:35.552 "nvme_admin": false, 00:11:35.552 "nvme_io": false, 00:11:35.552 "nvme_io_md": false, 00:11:35.552 "write_zeroes": true, 00:11:35.552 "zcopy": true, 00:11:35.552 "get_zone_info": false, 00:11:35.552 "zone_management": false, 00:11:35.552 "zone_append": false, 00:11:35.552 "compare": false, 00:11:35.552 "compare_and_write": false, 00:11:35.552 "abort": true, 00:11:35.552 "seek_hole": false, 00:11:35.552 "seek_data": false, 00:11:35.552 "copy": true, 00:11:35.552 "nvme_iov_md": false 00:11:35.552 }, 00:11:35.552 "memory_domains": [ 00:11:35.552 { 00:11:35.552 "dma_device_id": "system", 00:11:35.552 "dma_device_type": 1 00:11:35.552 }, 00:11:35.552 { 00:11:35.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.552 "dma_device_type": 2 00:11:35.552 } 00:11:35.552 ], 00:11:35.552 "driver_specific": {} 00:11:35.552 } 00:11:35.552 ] 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:35.552 "name": "Existed_Raid", 00:11:35.552 "uuid": "eef6f05b-b370-45cd-8334-b9fa7fe20606", 00:11:35.552 "strip_size_kb": 0, 00:11:35.552 "state": "online", 00:11:35.552 "raid_level": "raid1", 00:11:35.552 "superblock": true, 00:11:35.552 "num_base_bdevs": 2, 00:11:35.552 "num_base_bdevs_discovered": 2, 00:11:35.552 "num_base_bdevs_operational": 2, 00:11:35.552 "base_bdevs_list": [ 00:11:35.552 { 00:11:35.552 "name": "BaseBdev1", 00:11:35.552 "uuid": "a732b6bc-d6e7-47a1-88db-cbd72a65cdfb", 00:11:35.552 "is_configured": true, 00:11:35.552 "data_offset": 2048, 00:11:35.552 "data_size": 63488 00:11:35.552 }, 00:11:35.552 { 00:11:35.552 "name": "BaseBdev2", 00:11:35.552 "uuid": "535f2d20-e3d6-4594-846e-ce2895686868", 00:11:35.552 "is_configured": true, 00:11:35.552 "data_offset": 2048, 00:11:35.552 "data_size": 63488 00:11:35.552 } 00:11:35.552 ] 00:11:35.552 }' 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.552 07:08:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:36.119 07:08:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.119 [2024-11-20 07:08:33.236352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.119 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.119 "name": "Existed_Raid", 00:11:36.119 "aliases": [ 00:11:36.119 "eef6f05b-b370-45cd-8334-b9fa7fe20606" 00:11:36.119 ], 00:11:36.119 "product_name": "Raid Volume", 00:11:36.119 "block_size": 512, 00:11:36.119 "num_blocks": 63488, 00:11:36.119 "uuid": "eef6f05b-b370-45cd-8334-b9fa7fe20606", 00:11:36.119 "assigned_rate_limits": { 00:11:36.120 "rw_ios_per_sec": 0, 00:11:36.120 "rw_mbytes_per_sec": 0, 00:11:36.120 "r_mbytes_per_sec": 0, 00:11:36.120 "w_mbytes_per_sec": 0 00:11:36.120 }, 00:11:36.120 "claimed": false, 00:11:36.120 "zoned": false, 00:11:36.120 "supported_io_types": { 00:11:36.120 "read": true, 00:11:36.120 "write": true, 00:11:36.120 "unmap": false, 00:11:36.120 "flush": false, 00:11:36.120 "reset": true, 00:11:36.120 "nvme_admin": false, 00:11:36.120 "nvme_io": false, 00:11:36.120 "nvme_io_md": false, 00:11:36.120 "write_zeroes": true, 00:11:36.120 "zcopy": false, 00:11:36.120 "get_zone_info": false, 00:11:36.120 "zone_management": false, 00:11:36.120 "zone_append": false, 00:11:36.120 "compare": false, 00:11:36.120 "compare_and_write": false, 00:11:36.120 "abort": false, 00:11:36.120 "seek_hole": false, 00:11:36.120 "seek_data": false, 00:11:36.120 "copy": false, 00:11:36.120 "nvme_iov_md": false 00:11:36.120 }, 00:11:36.120 "memory_domains": [ 00:11:36.120 { 00:11:36.120 "dma_device_id": "system", 00:11:36.120 
"dma_device_type": 1 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.120 "dma_device_type": 2 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "system", 00:11:36.120 "dma_device_type": 1 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.120 "dma_device_type": 2 00:11:36.120 } 00:11:36.120 ], 00:11:36.120 "driver_specific": { 00:11:36.120 "raid": { 00:11:36.120 "uuid": "eef6f05b-b370-45cd-8334-b9fa7fe20606", 00:11:36.120 "strip_size_kb": 0, 00:11:36.120 "state": "online", 00:11:36.120 "raid_level": "raid1", 00:11:36.120 "superblock": true, 00:11:36.120 "num_base_bdevs": 2, 00:11:36.120 "num_base_bdevs_discovered": 2, 00:11:36.120 "num_base_bdevs_operational": 2, 00:11:36.120 "base_bdevs_list": [ 00:11:36.120 { 00:11:36.120 "name": "BaseBdev1", 00:11:36.120 "uuid": "a732b6bc-d6e7-47a1-88db-cbd72a65cdfb", 00:11:36.120 "is_configured": true, 00:11:36.120 "data_offset": 2048, 00:11:36.120 "data_size": 63488 00:11:36.120 }, 00:11:36.120 { 00:11:36.120 "name": "BaseBdev2", 00:11:36.120 "uuid": "535f2d20-e3d6-4594-846e-ce2895686868", 00:11:36.120 "is_configured": true, 00:11:36.120 "data_offset": 2048, 00:11:36.120 "data_size": 63488 00:11:36.120 } 00:11:36.120 ] 00:11:36.120 } 00:11:36.120 } 00:11:36.120 }' 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:36.120 BaseBdev2' 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:36.379 07:08:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.379 [2024-11-20 07:08:33.516183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.379 "name": "Existed_Raid", 00:11:36.379 "uuid": "eef6f05b-b370-45cd-8334-b9fa7fe20606", 00:11:36.379 "strip_size_kb": 0, 00:11:36.379 "state": "online", 00:11:36.379 "raid_level": "raid1", 00:11:36.379 "superblock": true, 00:11:36.379 "num_base_bdevs": 2, 00:11:36.379 "num_base_bdevs_discovered": 1, 00:11:36.379 "num_base_bdevs_operational": 1, 00:11:36.379 "base_bdevs_list": [ 00:11:36.379 { 00:11:36.379 "name": null, 00:11:36.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.379 "is_configured": false, 00:11:36.379 "data_offset": 0, 00:11:36.379 "data_size": 63488 00:11:36.379 }, 00:11:36.379 { 00:11:36.379 "name": "BaseBdev2", 00:11:36.379 "uuid": "535f2d20-e3d6-4594-846e-ce2895686868", 00:11:36.379 "is_configured": true, 00:11:36.379 "data_offset": 2048, 00:11:36.379 "data_size": 63488 00:11:36.379 } 00:11:36.379 ] 00:11:36.379 }' 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.379 07:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.946 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:11:36.946 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.946 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.947 [2024-11-20 07:08:34.166954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.947 [2024-11-20 07:08:34.167089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:36.947 [2024-11-20 07:08:34.258834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.947 [2024-11-20 07:08:34.258935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.947 [2024-11-20 07:08:34.258957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.947 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62851 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62851 ']' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62851 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62851 00:11:37.205 killing process with pid 62851 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62851' 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62851 00:11:37.205 [2024-11-20 07:08:34.350207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:37.205 07:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62851 00:11:37.205 [2024-11-20 07:08:34.365853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.140 07:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:38.140 00:11:38.140 real 0m5.549s 00:11:38.140 user 0m8.403s 00:11:38.140 sys 0m0.766s 00:11:38.140 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.140 07:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:38.140 ************************************ 00:11:38.140 END TEST raid_state_function_test_sb 00:11:38.140 ************************************ 00:11:38.399 07:08:35 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:11:38.399 07:08:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:38.399 07:08:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.399 07:08:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.399 ************************************ 00:11:38.399 START TEST raid_superblock_test 00:11:38.399 ************************************ 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:38.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63103 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63103 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63103 ']' 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.399 07:08:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.399 [2024-11-20 07:08:35.585136] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:11:38.399 [2024-11-20 07:08:35.585326] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63103 ] 00:11:38.657 [2024-11-20 07:08:35.770264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.657 [2024-11-20 07:08:35.902133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.915 [2024-11-20 07:08:36.110303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.915 [2024-11-20 07:08:36.110378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:39.481 
07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.481 malloc1 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.481 [2024-11-20 07:08:36.614289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:39.481 [2024-11-20 07:08:36.614573] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.481 [2024-11-20 07:08:36.614656] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:39.481 [2024-11-20 07:08:36.614783] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.481 [2024-11-20 07:08:36.617739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.481 [2024-11-20 07:08:36.617927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:39.481 pt1 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.481 malloc2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.481 [2024-11-20 07:08:36.671385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.481 [2024-11-20 07:08:36.671609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.481 [2024-11-20 07:08:36.671688] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:39.481 [2024-11-20 07:08:36.671803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.481 [2024-11-20 07:08:36.674669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.481 [2024-11-20 07:08:36.674823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.481 
pt2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.481 [2024-11-20 07:08:36.683576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:39.481 [2024-11-20 07:08:36.686199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.481 [2024-11-20 07:08:36.686558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:39.481 [2024-11-20 07:08:36.686681] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.481 [2024-11-20 07:08:36.687051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:11:39.481 [2024-11-20 07:08:36.687376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:39.481 [2024-11-20 07:08:36.687528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:39.481 [2024-11-20 07:08:36.687957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.481 "name": "raid_bdev1", 00:11:39.481 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:39.481 "strip_size_kb": 0, 00:11:39.481 "state": "online", 00:11:39.481 "raid_level": "raid1", 00:11:39.481 "superblock": true, 00:11:39.481 "num_base_bdevs": 2, 00:11:39.481 "num_base_bdevs_discovered": 2, 00:11:39.481 "num_base_bdevs_operational": 2, 00:11:39.481 "base_bdevs_list": [ 00:11:39.481 { 00:11:39.481 "name": "pt1", 00:11:39.481 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:11:39.481 "is_configured": true, 00:11:39.481 "data_offset": 2048, 00:11:39.481 "data_size": 63488 00:11:39.481 }, 00:11:39.481 { 00:11:39.481 "name": "pt2", 00:11:39.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.481 "is_configured": true, 00:11:39.481 "data_offset": 2048, 00:11:39.481 "data_size": 63488 00:11:39.481 } 00:11:39.481 ] 00:11:39.481 }' 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.481 07:08:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.048 [2024-11-20 07:08:37.212458] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:11:40.048 "name": "raid_bdev1", 00:11:40.048 "aliases": [ 00:11:40.048 "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d" 00:11:40.048 ], 00:11:40.048 "product_name": "Raid Volume", 00:11:40.048 "block_size": 512, 00:11:40.048 "num_blocks": 63488, 00:11:40.048 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:40.048 "assigned_rate_limits": { 00:11:40.048 "rw_ios_per_sec": 0, 00:11:40.048 "rw_mbytes_per_sec": 0, 00:11:40.048 "r_mbytes_per_sec": 0, 00:11:40.048 "w_mbytes_per_sec": 0 00:11:40.048 }, 00:11:40.048 "claimed": false, 00:11:40.048 "zoned": false, 00:11:40.048 "supported_io_types": { 00:11:40.048 "read": true, 00:11:40.048 "write": true, 00:11:40.048 "unmap": false, 00:11:40.048 "flush": false, 00:11:40.048 "reset": true, 00:11:40.048 "nvme_admin": false, 00:11:40.048 "nvme_io": false, 00:11:40.048 "nvme_io_md": false, 00:11:40.048 "write_zeroes": true, 00:11:40.048 "zcopy": false, 00:11:40.048 "get_zone_info": false, 00:11:40.048 "zone_management": false, 00:11:40.048 "zone_append": false, 00:11:40.048 "compare": false, 00:11:40.048 "compare_and_write": false, 00:11:40.048 "abort": false, 00:11:40.048 "seek_hole": false, 00:11:40.048 "seek_data": false, 00:11:40.048 "copy": false, 00:11:40.048 "nvme_iov_md": false 00:11:40.048 }, 00:11:40.048 "memory_domains": [ 00:11:40.048 { 00:11:40.048 "dma_device_id": "system", 00:11:40.048 "dma_device_type": 1 00:11:40.048 }, 00:11:40.048 { 00:11:40.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.048 "dma_device_type": 2 00:11:40.048 }, 00:11:40.048 { 00:11:40.048 "dma_device_id": "system", 00:11:40.048 "dma_device_type": 1 00:11:40.048 }, 00:11:40.048 { 00:11:40.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.048 "dma_device_type": 2 00:11:40.048 } 00:11:40.048 ], 00:11:40.048 "driver_specific": { 00:11:40.048 "raid": { 00:11:40.048 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:40.048 "strip_size_kb": 0, 00:11:40.048 "state": "online", 00:11:40.048 "raid_level": "raid1", 
00:11:40.048 "superblock": true, 00:11:40.048 "num_base_bdevs": 2, 00:11:40.048 "num_base_bdevs_discovered": 2, 00:11:40.048 "num_base_bdevs_operational": 2, 00:11:40.048 "base_bdevs_list": [ 00:11:40.048 { 00:11:40.048 "name": "pt1", 00:11:40.048 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.048 "is_configured": true, 00:11:40.048 "data_offset": 2048, 00:11:40.048 "data_size": 63488 00:11:40.048 }, 00:11:40.048 { 00:11:40.048 "name": "pt2", 00:11:40.048 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.048 "is_configured": true, 00:11:40.048 "data_offset": 2048, 00:11:40.048 "data_size": 63488 00:11:40.048 } 00:11:40.048 ] 00:11:40.048 } 00:11:40.048 } 00:11:40.048 }' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:40.048 pt2' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.048 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:40.307 [2024-11-20 07:08:37.452509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d ']' 00:11:40.307 07:08:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 [2024-11-20 07:08:37.500138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.307 [2024-11-20 07:08:37.500170] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:40.307 [2024-11-20 07:08:37.500290] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:40.307 [2024-11-20 07:08:37.500369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:40.307 [2024-11-20 07:08:37.500391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:40.307 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:40.307 07:08:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:40.566 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.566 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:40.566 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:40.566 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:11:40.566 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.566 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.567 [2024-11-20 07:08:37.632249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:40.567 [2024-11-20 07:08:37.635045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:40.567 [2024-11-20 07:08:37.635134] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:40.567 [2024-11-20 07:08:37.635211] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:40.567 [2024-11-20 07:08:37.635236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:40.567 [2024-11-20 07:08:37.635252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:40.567 request: 00:11:40.567 { 00:11:40.567 "name": "raid_bdev1", 00:11:40.567 "raid_level": "raid1", 00:11:40.567 "base_bdevs": [ 00:11:40.567 "malloc1", 00:11:40.567 "malloc2" 00:11:40.567 ], 00:11:40.567 "superblock": false, 00:11:40.567 "method": "bdev_raid_create", 00:11:40.567 "req_id": 1 00:11:40.567 } 00:11:40.567 Got 
JSON-RPC error response 00:11:40.567 response: 00:11:40.567 { 00:11:40.567 "code": -17, 00:11:40.567 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:40.567 } 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.567 [2024-11-20 07:08:37.700175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:40.567 [2024-11-20 07:08:37.700436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:40.567 [2024-11-20 07:08:37.700579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:40.567 [2024-11-20 07:08:37.700692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.567 [2024-11-20 07:08:37.703718] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.567 [2024-11-20 07:08:37.703927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:40.567 [2024-11-20 07:08:37.704143] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:40.567 [2024-11-20 07:08:37.704317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:40.567 pt1 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.567 
07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.567 "name": "raid_bdev1", 00:11:40.567 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:40.567 "strip_size_kb": 0, 00:11:40.567 "state": "configuring", 00:11:40.567 "raid_level": "raid1", 00:11:40.567 "superblock": true, 00:11:40.567 "num_base_bdevs": 2, 00:11:40.567 "num_base_bdevs_discovered": 1, 00:11:40.567 "num_base_bdevs_operational": 2, 00:11:40.567 "base_bdevs_list": [ 00:11:40.567 { 00:11:40.567 "name": "pt1", 00:11:40.567 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.567 "is_configured": true, 00:11:40.567 "data_offset": 2048, 00:11:40.567 "data_size": 63488 00:11:40.567 }, 00:11:40.567 { 00:11:40.567 "name": null, 00:11:40.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.567 "is_configured": false, 00:11:40.567 "data_offset": 2048, 00:11:40.567 "data_size": 63488 00:11:40.567 } 00:11:40.567 ] 00:11:40.567 }' 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.567 07:08:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.225 [2024-11-20 07:08:38.216420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.225 [2024-11-20 07:08:38.216711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.225 [2024-11-20 07:08:38.216750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:41.225 [2024-11-20 07:08:38.216768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.225 [2024-11-20 07:08:38.217397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.225 [2024-11-20 07:08:38.217432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.225 [2024-11-20 07:08:38.217544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.225 [2024-11-20 07:08:38.217579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.225 [2024-11-20 07:08:38.217750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:41.225 [2024-11-20 07:08:38.217778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:41.225 [2024-11-20 07:08:38.218098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:41.225 [2024-11-20 07:08:38.218296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:41.225 [2024-11-20 07:08:38.218319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:11:41.225 [2024-11-20 07:08:38.218488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.225 pt2 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.225 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.226 "name": "raid_bdev1", 00:11:41.226 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:41.226 "strip_size_kb": 0, 00:11:41.226 "state": "online", 00:11:41.226 "raid_level": "raid1", 00:11:41.226 "superblock": true, 00:11:41.226 "num_base_bdevs": 2, 00:11:41.226 "num_base_bdevs_discovered": 2, 00:11:41.226 "num_base_bdevs_operational": 2, 00:11:41.226 "base_bdevs_list": [ 00:11:41.226 { 00:11:41.226 "name": "pt1", 00:11:41.226 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.226 "is_configured": true, 00:11:41.226 "data_offset": 2048, 00:11:41.226 "data_size": 63488 00:11:41.226 }, 00:11:41.226 { 00:11:41.226 "name": "pt2", 00:11:41.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.226 "is_configured": true, 00:11:41.226 "data_offset": 2048, 00:11:41.226 "data_size": 63488 00:11:41.226 } 00:11:41.226 ] 00:11:41.226 }' 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.226 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.485 [2024-11-20 07:08:38.756863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.485 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.743 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:41.743 "name": "raid_bdev1", 00:11:41.743 "aliases": [ 00:11:41.743 "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d" 00:11:41.743 ], 00:11:41.743 "product_name": "Raid Volume", 00:11:41.743 "block_size": 512, 00:11:41.743 "num_blocks": 63488, 00:11:41.743 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:41.743 "assigned_rate_limits": { 00:11:41.743 "rw_ios_per_sec": 0, 00:11:41.743 "rw_mbytes_per_sec": 0, 00:11:41.743 "r_mbytes_per_sec": 0, 00:11:41.743 "w_mbytes_per_sec": 0 00:11:41.743 }, 00:11:41.743 "claimed": false, 00:11:41.743 "zoned": false, 00:11:41.743 "supported_io_types": { 00:11:41.743 "read": true, 00:11:41.743 "write": true, 00:11:41.743 "unmap": false, 00:11:41.743 "flush": false, 00:11:41.744 "reset": true, 00:11:41.744 "nvme_admin": false, 00:11:41.744 "nvme_io": false, 00:11:41.744 "nvme_io_md": false, 00:11:41.744 "write_zeroes": true, 00:11:41.744 "zcopy": false, 00:11:41.744 "get_zone_info": false, 00:11:41.744 "zone_management": false, 00:11:41.744 "zone_append": false, 00:11:41.744 "compare": false, 00:11:41.744 "compare_and_write": false, 00:11:41.744 "abort": false, 00:11:41.744 "seek_hole": false, 00:11:41.744 "seek_data": false, 00:11:41.744 "copy": false, 00:11:41.744 "nvme_iov_md": false 00:11:41.744 }, 00:11:41.744 "memory_domains": [ 00:11:41.744 { 00:11:41.744 "dma_device_id": 
"system", 00:11:41.744 "dma_device_type": 1 00:11:41.744 }, 00:11:41.744 { 00:11:41.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.744 "dma_device_type": 2 00:11:41.744 }, 00:11:41.744 { 00:11:41.744 "dma_device_id": "system", 00:11:41.744 "dma_device_type": 1 00:11:41.744 }, 00:11:41.744 { 00:11:41.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.744 "dma_device_type": 2 00:11:41.744 } 00:11:41.744 ], 00:11:41.744 "driver_specific": { 00:11:41.744 "raid": { 00:11:41.744 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:41.744 "strip_size_kb": 0, 00:11:41.744 "state": "online", 00:11:41.744 "raid_level": "raid1", 00:11:41.744 "superblock": true, 00:11:41.744 "num_base_bdevs": 2, 00:11:41.744 "num_base_bdevs_discovered": 2, 00:11:41.744 "num_base_bdevs_operational": 2, 00:11:41.744 "base_bdevs_list": [ 00:11:41.744 { 00:11:41.744 "name": "pt1", 00:11:41.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:41.744 "is_configured": true, 00:11:41.744 "data_offset": 2048, 00:11:41.744 "data_size": 63488 00:11:41.744 }, 00:11:41.744 { 00:11:41.744 "name": "pt2", 00:11:41.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.744 "is_configured": true, 00:11:41.744 "data_offset": 2048, 00:11:41.744 "data_size": 63488 00:11:41.744 } 00:11:41.744 ] 00:11:41.744 } 00:11:41.744 } 00:11:41.744 }' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:41.744 pt2' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.744 07:08:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.744 [2024-11-20 07:08:39.020994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:41.744 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d '!=' 0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d ']' 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.002 [2024-11-20 07:08:39.072728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.002 "name": "raid_bdev1", 00:11:42.002 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:42.002 "strip_size_kb": 0, 00:11:42.002 "state": "online", 00:11:42.002 "raid_level": "raid1", 00:11:42.002 "superblock": true, 00:11:42.002 "num_base_bdevs": 2, 00:11:42.002 "num_base_bdevs_discovered": 1, 00:11:42.002 "num_base_bdevs_operational": 1, 00:11:42.002 "base_bdevs_list": [ 00:11:42.002 { 00:11:42.002 "name": null, 00:11:42.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.002 "is_configured": false, 00:11:42.002 "data_offset": 0, 00:11:42.002 "data_size": 63488 00:11:42.002 }, 00:11:42.002 { 00:11:42.002 "name": "pt2", 00:11:42.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.002 "is_configured": true, 00:11:42.002 "data_offset": 2048, 00:11:42.002 "data_size": 63488 00:11:42.002 } 00:11:42.002 ] 00:11:42.002 }' 
00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.002 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.262 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.262 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.262 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.262 [2024-11-20 07:08:39.572827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.262 [2024-11-20 07:08:39.572860] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.262 [2024-11-20 07:08:39.572995] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.262 [2024-11-20 07:08:39.573060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.262 [2024-11-20 07:08:39.573079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:42.262 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.520 [2024-11-20 07:08:39.644842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:42.520 [2024-11-20 07:08:39.644947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.520 [2024-11-20 07:08:39.644976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:42.520 [2024-11-20 07:08:39.644993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.520 
[2024-11-20 07:08:39.647857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.520 [2024-11-20 07:08:39.647939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:42.520 [2024-11-20 07:08:39.648049] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:42.520 [2024-11-20 07:08:39.648110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.520 [2024-11-20 07:08:39.648237] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:42.520 [2024-11-20 07:08:39.648259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.520 [2024-11-20 07:08:39.648550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:42.520 [2024-11-20 07:08:39.648752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.520 [2024-11-20 07:08:39.648769] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:42.520 [2024-11-20 07:08:39.649016] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.520 pt2 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.520 "name": "raid_bdev1", 00:11:42.520 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:42.520 "strip_size_kb": 0, 00:11:42.520 "state": "online", 00:11:42.520 "raid_level": "raid1", 00:11:42.520 "superblock": true, 00:11:42.520 "num_base_bdevs": 2, 00:11:42.520 "num_base_bdevs_discovered": 1, 00:11:42.520 "num_base_bdevs_operational": 1, 00:11:42.520 "base_bdevs_list": [ 00:11:42.520 { 00:11:42.520 "name": null, 00:11:42.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.520 "is_configured": false, 00:11:42.520 "data_offset": 2048, 00:11:42.520 "data_size": 63488 00:11:42.520 }, 00:11:42.520 { 00:11:42.520 "name": "pt2", 00:11:42.520 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.520 "is_configured": true, 00:11:42.520 "data_offset": 2048, 00:11:42.520 "data_size": 63488 00:11:42.520 } 00:11:42.520 ] 00:11:42.520 }' 
00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.520 07:08:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 [2024-11-20 07:08:40.161061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.088 [2024-11-20 07:08:40.161234] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.088 [2024-11-20 07:08:40.161341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.088 [2024-11-20 07:08:40.161411] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.088 [2024-11-20 07:08:40.161426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 [2024-11-20 07:08:40.225105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:43.088 [2024-11-20 07:08:40.225178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.088 [2024-11-20 07:08:40.225208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:43.088 [2024-11-20 07:08:40.225233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.088 [2024-11-20 07:08:40.228172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.088 [2024-11-20 07:08:40.228219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:43.088 [2024-11-20 07:08:40.228326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:43.088 [2024-11-20 07:08:40.228382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:43.088 [2024-11-20 07:08:40.228567] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:43.088 [2024-11-20 07:08:40.228585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.088 [2024-11-20 07:08:40.228623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:43.088 [2024-11-20 07:08:40.228694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:11:43.088 [2024-11-20 07:08:40.228806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:43.088 [2024-11-20 07:08:40.228822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.088 [2024-11-20 07:08:40.229166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:43.088 [2024-11-20 07:08:40.229359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:43.088 [2024-11-20 07:08:40.229380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:43.088 [2024-11-20 07:08:40.229605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.088 pt1 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.088 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.088 "name": "raid_bdev1", 00:11:43.088 "uuid": "0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d", 00:11:43.088 "strip_size_kb": 0, 00:11:43.088 "state": "online", 00:11:43.088 "raid_level": "raid1", 00:11:43.088 "superblock": true, 00:11:43.088 "num_base_bdevs": 2, 00:11:43.088 "num_base_bdevs_discovered": 1, 00:11:43.088 "num_base_bdevs_operational": 1, 00:11:43.088 "base_bdevs_list": [ 00:11:43.088 { 00:11:43.088 "name": null, 00:11:43.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.088 "is_configured": false, 00:11:43.088 "data_offset": 2048, 00:11:43.089 "data_size": 63488 00:11:43.089 }, 00:11:43.089 { 00:11:43.089 "name": "pt2", 00:11:43.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.089 "is_configured": true, 00:11:43.089 "data_offset": 2048, 00:11:43.089 "data_size": 63488 00:11:43.089 } 00:11:43.089 ] 00:11:43.089 }' 00:11:43.089 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.089 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:43.656 07:08:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:43.656 [2024-11-20 07:08:40.818069] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d '!=' 0e6ea91f-53a5-445d-9eb8-04d8c6a88b8d ']' 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63103 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63103 ']' 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63103 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63103 00:11:43.656 killing 
process with pid 63103 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63103' 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63103 00:11:43.656 [2024-11-20 07:08:40.903096] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.656 07:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63103 00:11:43.656 [2024-11-20 07:08:40.903200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.656 [2024-11-20 07:08:40.903261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.656 [2024-11-20 07:08:40.903282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:43.914 [2024-11-20 07:08:41.095865] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.847 ************************************ 00:11:44.847 END TEST raid_superblock_test 00:11:44.847 ************************************ 00:11:44.847 07:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:44.847 00:11:44.847 real 0m6.640s 00:11:44.847 user 0m10.509s 00:11:44.847 sys 0m0.956s 00:11:44.847 07:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.847 07:08:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.847 07:08:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:11:44.847 07:08:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:44.847 07:08:42 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.847 07:08:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.105 ************************************ 00:11:45.105 START TEST raid_read_error_test 00:11:45.105 ************************************ 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.106 07:08:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qyw30VgSLA 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63439 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63439 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63439 ']' 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.106 07:08:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.106 [2024-11-20 07:08:42.295994] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:11:45.106 [2024-11-20 07:08:42.296355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63439 ] 00:11:45.364 [2024-11-20 07:08:42.481677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.364 [2024-11-20 07:08:42.607353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.622 [2024-11-20 07:08:42.813244] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.622 [2024-11-20 07:08:42.813529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.190 BaseBdev1_malloc 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.190 true 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.190 [2024-11-20 07:08:43.321424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.190 [2024-11-20 07:08:43.321492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.190 [2024-11-20 07:08:43.321521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.190 [2024-11-20 07:08:43.321540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.190 [2024-11-20 07:08:43.324332] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.190 [2024-11-20 07:08:43.324384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.190 BaseBdev1 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:46.190 BaseBdev2_malloc 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.190 true 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.190 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.190 [2024-11-20 07:08:43.377369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.190 [2024-11-20 07:08:43.377581] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.190 [2024-11-20 07:08:43.377619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.190 [2024-11-20 07:08:43.377637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.191 [2024-11-20 07:08:43.380459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.191 [2024-11-20 07:08:43.380510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.191 BaseBdev2 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:46.191 07:08:43 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.191 [2024-11-20 07:08:43.385477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.191 [2024-11-20 07:08:43.388025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.191 [2024-11-20 07:08:43.388479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:46.191 [2024-11-20 07:08:43.388510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.191 [2024-11-20 07:08:43.388893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:46.191 [2024-11-20 07:08:43.389160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:46.191 [2024-11-20 07:08:43.389177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:46.191 [2024-11-20 07:08:43.389471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.191 "name": "raid_bdev1", 00:11:46.191 "uuid": "7eaa6b7a-7785-4dae-a590-47ad938dbb44", 00:11:46.191 "strip_size_kb": 0, 00:11:46.191 "state": "online", 00:11:46.191 "raid_level": "raid1", 00:11:46.191 "superblock": true, 00:11:46.191 "num_base_bdevs": 2, 00:11:46.191 "num_base_bdevs_discovered": 2, 00:11:46.191 "num_base_bdevs_operational": 2, 00:11:46.191 "base_bdevs_list": [ 00:11:46.191 { 00:11:46.191 "name": "BaseBdev1", 00:11:46.191 "uuid": "e0de6bc2-361a-5729-b565-6f81cd0fd184", 00:11:46.191 "is_configured": true, 00:11:46.191 "data_offset": 2048, 00:11:46.191 "data_size": 63488 00:11:46.191 }, 00:11:46.191 { 00:11:46.191 "name": "BaseBdev2", 00:11:46.191 "uuid": "369e0fbb-62c6-524d-b60e-b3ff0537a980", 00:11:46.191 "is_configured": true, 00:11:46.191 "data_offset": 2048, 00:11:46.191 "data_size": 63488 00:11:46.191 } 00:11:46.191 ] 00:11:46.191 }' 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.191 07:08:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.758 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:46.758 07:08:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:46.758 [2024-11-20 07:08:44.015032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.741 07:08:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.741 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.741 "name": "raid_bdev1", 00:11:47.741 "uuid": "7eaa6b7a-7785-4dae-a590-47ad938dbb44", 00:11:47.742 "strip_size_kb": 0, 00:11:47.742 "state": "online", 00:11:47.742 "raid_level": "raid1", 00:11:47.742 "superblock": true, 00:11:47.742 "num_base_bdevs": 2, 00:11:47.742 "num_base_bdevs_discovered": 2, 00:11:47.742 "num_base_bdevs_operational": 2, 00:11:47.742 "base_bdevs_list": [ 00:11:47.742 { 00:11:47.742 "name": "BaseBdev1", 00:11:47.742 "uuid": "e0de6bc2-361a-5729-b565-6f81cd0fd184", 00:11:47.742 "is_configured": true, 00:11:47.742 "data_offset": 2048, 00:11:47.742 "data_size": 63488 00:11:47.742 }, 00:11:47.742 { 00:11:47.742 "name": "BaseBdev2", 00:11:47.742 "uuid": "369e0fbb-62c6-524d-b60e-b3ff0537a980", 00:11:47.742 "is_configured": true, 00:11:47.742 "data_offset": 2048, 00:11:47.742 "data_size": 63488 
00:11:47.742 } 00:11:47.742 ] 00:11:47.742 }' 00:11:47.742 07:08:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.742 07:08:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.308 07:08:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.308 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.308 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.308 [2024-11-20 07:08:45.389293] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.308 [2024-11-20 07:08:45.389496] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.308 [2024-11-20 07:08:45.393073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.308 [2024-11-20 07:08:45.393253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.309 [2024-11-20 07:08:45.393466] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.309 [2024-11-20 07:08:45.393633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:48.309 { 00:11:48.309 "results": [ 00:11:48.309 { 00:11:48.309 "job": "raid_bdev1", 00:11:48.309 "core_mask": "0x1", 00:11:48.309 "workload": "randrw", 00:11:48.309 "percentage": 50, 00:11:48.309 "status": "finished", 00:11:48.309 "queue_depth": 1, 00:11:48.309 "io_size": 131072, 00:11:48.309 "runtime": 1.372017, 00:11:48.309 "iops": 12168.945428518742, 00:11:48.309 "mibps": 1521.1181785648428, 00:11:48.309 "io_failed": 0, 00:11:48.309 "io_timeout": 0, 00:11:48.309 "avg_latency_us": 78.0969360108028, 00:11:48.309 "min_latency_us": 40.02909090909091, 00:11:48.309 "max_latency_us": 1869.2654545454545 00:11:48.309 } 00:11:48.309 ], 
00:11:48.309 "core_count": 1 00:11:48.309 } 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63439 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63439 ']' 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63439 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63439 00:11:48.309 killing process with pid 63439 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63439' 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63439 00:11:48.309 [2024-11-20 07:08:45.432959] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.309 07:08:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63439 00:11:48.309 [2024-11-20 07:08:45.557126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qyw30VgSLA 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:49.683 ************************************ 00:11:49.683 END 
TEST raid_read_error_test 00:11:49.683 ************************************ 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:49.683 00:11:49.683 real 0m4.494s 00:11:49.683 user 0m5.609s 00:11:49.683 sys 0m0.554s 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.683 07:08:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.683 07:08:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:11:49.683 07:08:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:49.683 07:08:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.683 07:08:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.683 ************************************ 00:11:49.683 START TEST raid_write_error_test 00:11:49.683 ************************************ 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gm4xncbBxQ 00:11:49.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63584 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63584 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63584 ']' 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.683 07:08:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.683 [2024-11-20 07:08:46.820978] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:11:49.683 [2024-11-20 07:08:46.821158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63584 ] 00:11:49.683 [2024-11-20 07:08:46.996601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.942 [2024-11-20 07:08:47.129067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.200 [2024-11-20 07:08:47.337904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.200 [2024-11-20 07:08:47.338020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 BaseBdev1_malloc 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 true 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 [2024-11-20 07:08:47.963524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:50.767 [2024-11-20 07:08:47.963598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.767 [2024-11-20 07:08:47.963627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:50.767 [2024-11-20 07:08:47.963645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.767 [2024-11-20 07:08:47.966590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.767 [2024-11-20 07:08:47.966643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:50.767 BaseBdev1 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 BaseBdev2_malloc 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:50.767 07:08:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 true 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.767 [2024-11-20 07:08:48.024443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:50.767 [2024-11-20 07:08:48.024757] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.767 [2024-11-20 07:08:48.024811] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:50.767 [2024-11-20 07:08:48.024847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.767 [2024-11-20 07:08:48.028023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.767 [2024-11-20 07:08:48.028089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:50.767 BaseBdev2 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.767 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.768 [2024-11-20 07:08:48.032497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:11:50.768 [2024-11-20 07:08:48.035186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.768 [2024-11-20 07:08:48.035440] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:50.768 [2024-11-20 07:08:48.035494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.768 [2024-11-20 07:08:48.035800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:50.768 [2024-11-20 07:08:48.036103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:50.768 [2024-11-20 07:08:48.036120] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:50.768 [2024-11-20 07:08:48.036481] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.768 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.026 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.026 "name": "raid_bdev1", 00:11:51.026 "uuid": "62f03ab9-b907-48f7-be66-799ebdcca14d", 00:11:51.026 "strip_size_kb": 0, 00:11:51.026 "state": "online", 00:11:51.026 "raid_level": "raid1", 00:11:51.026 "superblock": true, 00:11:51.026 "num_base_bdevs": 2, 00:11:51.026 "num_base_bdevs_discovered": 2, 00:11:51.026 "num_base_bdevs_operational": 2, 00:11:51.026 "base_bdevs_list": [ 00:11:51.026 { 00:11:51.026 "name": "BaseBdev1", 00:11:51.026 "uuid": "c81dd767-0ced-5859-a852-812fe63f2944", 00:11:51.026 "is_configured": true, 00:11:51.026 "data_offset": 2048, 00:11:51.026 "data_size": 63488 00:11:51.026 }, 00:11:51.026 { 00:11:51.026 "name": "BaseBdev2", 00:11:51.026 "uuid": "5ef3f697-0977-59ab-b6ed-b25cd384d772", 00:11:51.026 "is_configured": true, 00:11:51.026 "data_offset": 2048, 00:11:51.026 "data_size": 63488 00:11:51.026 } 00:11:51.026 ] 00:11:51.026 }' 00:11:51.026 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.026 07:08:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.284 07:08:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:51.284 07:08:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:51.543 [2024-11-20 07:08:48.694172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:52.478 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:52.478 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.478 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.478 [2024-11-20 07:08:49.575772] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:52.478 [2024-11-20 07:08:49.575931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.478 [2024-11-20 07:08:49.576158] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:11:52.478 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.479 "name": "raid_bdev1", 00:11:52.479 "uuid": "62f03ab9-b907-48f7-be66-799ebdcca14d", 00:11:52.479 "strip_size_kb": 0, 00:11:52.479 "state": "online", 00:11:52.479 "raid_level": "raid1", 00:11:52.479 "superblock": true, 00:11:52.479 "num_base_bdevs": 2, 00:11:52.479 "num_base_bdevs_discovered": 1, 00:11:52.479 "num_base_bdevs_operational": 1, 00:11:52.479 "base_bdevs_list": [ 00:11:52.479 { 00:11:52.479 "name": null, 00:11:52.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.479 "is_configured": false, 00:11:52.479 "data_offset": 0, 00:11:52.479 "data_size": 63488 00:11:52.479 }, 00:11:52.479 { 00:11:52.479 "name": 
"BaseBdev2", 00:11:52.479 "uuid": "5ef3f697-0977-59ab-b6ed-b25cd384d772", 00:11:52.479 "is_configured": true, 00:11:52.479 "data_offset": 2048, 00:11:52.479 "data_size": 63488 00:11:52.479 } 00:11:52.479 ] 00:11:52.479 }' 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.479 07:08:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.046 [2024-11-20 07:08:50.176414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.046 [2024-11-20 07:08:50.176448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.046 [2024-11-20 07:08:50.179941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.046 [2024-11-20 07:08:50.180109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.046 [2024-11-20 07:08:50.180237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.046 [2024-11-20 07:08:50.180418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:53.046 { 00:11:53.046 "results": [ 00:11:53.046 { 00:11:53.046 "job": "raid_bdev1", 00:11:53.046 "core_mask": "0x1", 00:11:53.046 "workload": "randrw", 00:11:53.046 "percentage": 50, 00:11:53.046 "status": "finished", 00:11:53.046 "queue_depth": 1, 00:11:53.046 "io_size": 131072, 00:11:53.046 "runtime": 1.479151, 00:11:53.046 "iops": 13156.195682523285, 00:11:53.046 "mibps": 1644.5244603154106, 00:11:53.046 "io_failed": 0, 00:11:53.046 "io_timeout": 0, 
00:11:53.046 "avg_latency_us": 71.56974156778475, 00:11:53.046 "min_latency_us": 37.236363636363635, 00:11:53.046 "max_latency_us": 1936.290909090909 00:11:53.046 } 00:11:53.046 ], 00:11:53.046 "core_count": 1 00:11:53.046 } 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63584 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63584 ']' 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63584 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63584 00:11:53.046 killing process with pid 63584 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63584' 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63584 00:11:53.046 [2024-11-20 07:08:50.218880] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.046 07:08:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63584 00:11:53.046 [2024-11-20 07:08:50.340635] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gm4xncbBxQ 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:54.425 00:11:54.425 real 0m4.731s 00:11:54.425 user 0m6.059s 00:11:54.425 sys 0m0.550s 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.425 07:08:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 ************************************ 00:11:54.425 END TEST raid_write_error_test 00:11:54.425 ************************************ 00:11:54.425 07:08:51 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:54.425 07:08:51 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:54.425 07:08:51 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:11:54.425 07:08:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:54.425 07:08:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.425 07:08:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 ************************************ 00:11:54.425 START TEST raid_state_function_test 00:11:54.425 ************************************ 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:54.425 
07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63728 00:11:54.425 Process raid pid: 63728 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63728' 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63728 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63728 ']' 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.425 07:08:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.425 [2024-11-20 07:08:51.622385] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:11:54.425 [2024-11-20 07:08:51.622576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.684 [2024-11-20 07:08:51.809336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.684 [2024-11-20 07:08:51.947756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.943 [2024-11-20 07:08:52.161216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.943 [2024-11-20 07:08:52.161289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.510 [2024-11-20 07:08:52.668997] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.510 [2024-11-20 07:08:52.669066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.510 [2024-11-20 07:08:52.669083] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.510 [2024-11-20 07:08:52.669099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.510 [2024-11-20 07:08:52.669110] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.510 [2024-11-20 07:08:52.669123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.510 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.511 "name": "Existed_Raid", 00:11:55.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.511 "strip_size_kb": 64, 00:11:55.511 "state": "configuring", 00:11:55.511 "raid_level": "raid0", 00:11:55.511 "superblock": false, 00:11:55.511 "num_base_bdevs": 3, 00:11:55.511 "num_base_bdevs_discovered": 0, 00:11:55.511 "num_base_bdevs_operational": 3, 00:11:55.511 "base_bdevs_list": [ 00:11:55.511 { 00:11:55.511 "name": "BaseBdev1", 00:11:55.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.511 "is_configured": false, 00:11:55.511 "data_offset": 0, 00:11:55.511 "data_size": 0 00:11:55.511 }, 00:11:55.511 { 00:11:55.511 "name": "BaseBdev2", 00:11:55.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.511 "is_configured": false, 00:11:55.511 "data_offset": 0, 00:11:55.511 "data_size": 0 00:11:55.511 }, 00:11:55.511 { 00:11:55.511 "name": "BaseBdev3", 00:11:55.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.511 "is_configured": false, 00:11:55.511 "data_offset": 0, 00:11:55.511 "data_size": 0 00:11:55.511 } 00:11:55.511 ] 00:11:55.511 }' 00:11:55.511 07:08:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.511 07:08:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.077 07:08:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 [2024-11-20 07:08:53.209079] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.077 [2024-11-20 07:08:53.209140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 [2024-11-20 07:08:53.217081] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.077 [2024-11-20 07:08:53.217136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.077 [2024-11-20 07:08:53.217151] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.077 [2024-11-20 07:08:53.217166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.077 [2024-11-20 07:08:53.217176] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.077 [2024-11-20 07:08:53.217190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.077 [2024-11-20 07:08:53.263358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.077 BaseBdev1 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.077 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.078 [ 00:11:56.078 { 00:11:56.078 "name": "BaseBdev1", 00:11:56.078 "aliases": [ 00:11:56.078 "bc8ab479-8a0f-4081-be00-909c03324837" 00:11:56.078 ], 00:11:56.078 
"product_name": "Malloc disk", 00:11:56.078 "block_size": 512, 00:11:56.078 "num_blocks": 65536, 00:11:56.078 "uuid": "bc8ab479-8a0f-4081-be00-909c03324837", 00:11:56.078 "assigned_rate_limits": { 00:11:56.078 "rw_ios_per_sec": 0, 00:11:56.078 "rw_mbytes_per_sec": 0, 00:11:56.078 "r_mbytes_per_sec": 0, 00:11:56.078 "w_mbytes_per_sec": 0 00:11:56.078 }, 00:11:56.078 "claimed": true, 00:11:56.078 "claim_type": "exclusive_write", 00:11:56.078 "zoned": false, 00:11:56.078 "supported_io_types": { 00:11:56.078 "read": true, 00:11:56.078 "write": true, 00:11:56.078 "unmap": true, 00:11:56.078 "flush": true, 00:11:56.078 "reset": true, 00:11:56.078 "nvme_admin": false, 00:11:56.078 "nvme_io": false, 00:11:56.078 "nvme_io_md": false, 00:11:56.078 "write_zeroes": true, 00:11:56.078 "zcopy": true, 00:11:56.078 "get_zone_info": false, 00:11:56.078 "zone_management": false, 00:11:56.078 "zone_append": false, 00:11:56.078 "compare": false, 00:11:56.078 "compare_and_write": false, 00:11:56.078 "abort": true, 00:11:56.078 "seek_hole": false, 00:11:56.078 "seek_data": false, 00:11:56.078 "copy": true, 00:11:56.078 "nvme_iov_md": false 00:11:56.078 }, 00:11:56.078 "memory_domains": [ 00:11:56.078 { 00:11:56.078 "dma_device_id": "system", 00:11:56.078 "dma_device_type": 1 00:11:56.078 }, 00:11:56.078 { 00:11:56.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.078 "dma_device_type": 2 00:11:56.078 } 00:11:56.078 ], 00:11:56.078 "driver_specific": {} 00:11:56.078 } 00:11:56.078 ] 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.078 07:08:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.078 "name": "Existed_Raid", 00:11:56.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.078 "strip_size_kb": 64, 00:11:56.078 "state": "configuring", 00:11:56.078 "raid_level": "raid0", 00:11:56.078 "superblock": false, 00:11:56.078 "num_base_bdevs": 3, 00:11:56.078 "num_base_bdevs_discovered": 1, 00:11:56.078 "num_base_bdevs_operational": 3, 00:11:56.078 "base_bdevs_list": [ 00:11:56.078 { 00:11:56.078 "name": "BaseBdev1", 
00:11:56.078 "uuid": "bc8ab479-8a0f-4081-be00-909c03324837", 00:11:56.078 "is_configured": true, 00:11:56.078 "data_offset": 0, 00:11:56.078 "data_size": 65536 00:11:56.078 }, 00:11:56.078 { 00:11:56.078 "name": "BaseBdev2", 00:11:56.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.078 "is_configured": false, 00:11:56.078 "data_offset": 0, 00:11:56.078 "data_size": 0 00:11:56.078 }, 00:11:56.078 { 00:11:56.078 "name": "BaseBdev3", 00:11:56.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.078 "is_configured": false, 00:11:56.078 "data_offset": 0, 00:11:56.078 "data_size": 0 00:11:56.078 } 00:11:56.078 ] 00:11:56.078 }' 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.078 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 [2024-11-20 07:08:53.835546] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.646 [2024-11-20 07:08:53.835612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.646 [2024-11-20 
07:08:53.843631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:56.646 [2024-11-20 07:08:53.846132] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:56.646 [2024-11-20 07:08:53.846191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:56.646 [2024-11-20 07:08:53.846207] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:56.646 [2024-11-20 07:08:53.846223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:56.646 "name": "Existed_Raid",
00:11:56.646 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.646 "strip_size_kb": 64,
00:11:56.646 "state": "configuring",
00:11:56.646 "raid_level": "raid0",
00:11:56.646 "superblock": false,
00:11:56.646 "num_base_bdevs": 3,
00:11:56.646 "num_base_bdevs_discovered": 1,
00:11:56.646 "num_base_bdevs_operational": 3,
00:11:56.646 "base_bdevs_list": [
00:11:56.646 {
00:11:56.646 "name": "BaseBdev1",
00:11:56.646 "uuid": "bc8ab479-8a0f-4081-be00-909c03324837",
00:11:56.646 "is_configured": true,
00:11:56.646 "data_offset": 0,
00:11:56.646 "data_size": 65536
00:11:56.646 },
00:11:56.646 {
00:11:56.646 "name": "BaseBdev2",
00:11:56.646 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.646 "is_configured": false,
00:11:56.646 "data_offset": 0,
00:11:56.646 "data_size": 0
00:11:56.646 },
00:11:56.646 {
00:11:56.646 "name": "BaseBdev3",
00:11:56.646 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:56.646 "is_configured": false,
00:11:56.646 "data_offset": 0,
00:11:56.646 "data_size": 0
00:11:56.646 }
00:11:56.646 ]
00:11:56.646 }'
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:56.646 07:08:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.213 [2024-11-20 07:08:54.439093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:57.213 BaseBdev2
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.213 [
00:11:57.213 {
00:11:57.213 "name": "BaseBdev2",
00:11:57.213 "aliases": [
00:11:57.213 "625d4594-8d7d-4a94-a335-1eb50175bcbf"
00:11:57.213 ],
00:11:57.213 "product_name": "Malloc disk",
00:11:57.213 "block_size": 512,
00:11:57.213 "num_blocks": 65536,
00:11:57.213 "uuid": "625d4594-8d7d-4a94-a335-1eb50175bcbf",
00:11:57.213 "assigned_rate_limits": {
00:11:57.213 "rw_ios_per_sec": 0,
00:11:57.213 "rw_mbytes_per_sec": 0,
00:11:57.213 "r_mbytes_per_sec": 0,
00:11:57.213 "w_mbytes_per_sec": 0
00:11:57.213 },
00:11:57.213 "claimed": true,
00:11:57.213 "claim_type": "exclusive_write",
00:11:57.213 "zoned": false,
00:11:57.213 "supported_io_types": {
00:11:57.213 "read": true,
00:11:57.213 "write": true,
00:11:57.213 "unmap": true,
00:11:57.213 "flush": true,
00:11:57.213 "reset": true,
00:11:57.213 "nvme_admin": false,
00:11:57.213 "nvme_io": false,
00:11:57.213 "nvme_io_md": false,
00:11:57.213 "write_zeroes": true,
00:11:57.213 "zcopy": true,
00:11:57.213 "get_zone_info": false,
00:11:57.213 "zone_management": false,
00:11:57.213 "zone_append": false,
00:11:57.213 "compare": false,
00:11:57.213 "compare_and_write": false,
00:11:57.213 "abort": true,
00:11:57.213 "seek_hole": false,
00:11:57.213 "seek_data": false,
00:11:57.213 "copy": true,
00:11:57.213 "nvme_iov_md": false
00:11:57.213 },
00:11:57.213 "memory_domains": [
00:11:57.213 {
00:11:57.213 "dma_device_id": "system",
00:11:57.213 "dma_device_type": 1
00:11:57.213 },
00:11:57.213 {
00:11:57.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:57.213 "dma_device_type": 2
00:11:57.213 }
00:11:57.213 ],
00:11:57.213 "driver_specific": {}
00:11:57.213 }
00:11:57.213 ]
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:57.213 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:57.472 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:57.472 "name": "Existed_Raid",
00:11:57.472 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:57.472 "strip_size_kb": 64,
00:11:57.472 "state": "configuring",
00:11:57.472 "raid_level": "raid0",
00:11:57.472 "superblock": false,
00:11:57.472 "num_base_bdevs": 3,
00:11:57.472 "num_base_bdevs_discovered": 2,
00:11:57.472 "num_base_bdevs_operational": 3,
00:11:57.472 "base_bdevs_list": [
00:11:57.472 {
00:11:57.472 "name": "BaseBdev1",
00:11:57.472 "uuid": "bc8ab479-8a0f-4081-be00-909c03324837",
00:11:57.472 "is_configured": true,
00:11:57.472 "data_offset": 0,
00:11:57.472 "data_size": 65536
00:11:57.472 },
00:11:57.472 {
00:11:57.472 "name": "BaseBdev2",
00:11:57.472 "uuid": "625d4594-8d7d-4a94-a335-1eb50175bcbf",
00:11:57.472 "is_configured": true,
00:11:57.472 "data_offset": 0,
00:11:57.472 "data_size": 65536
00:11:57.472 },
00:11:57.472 {
00:11:57.472 "name": "BaseBdev3",
00:11:57.472 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:57.472 "is_configured": false,
00:11:57.472 "data_offset": 0,
00:11:57.472 "data_size": 0
00:11:57.472 }
00:11:57.472 ]
00:11:57.472 }'
00:11:57.472 07:08:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:57.472 07:08:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.076 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:58.076 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.077 [2024-11-20 07:08:55.149568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:58.077 [2024-11-20 07:08:55.149623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:58.077 [2024-11-20 07:08:55.149643] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:11:58.077 [2024-11-20 07:08:55.150013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:11:58.077 [2024-11-20 07:08:55.150226] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:58.077 [2024-11-20 07:08:55.150253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:58.077 [2024-11-20 07:08:55.150578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:58.077 BaseBdev3
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.077 [
00:11:58.077 {
00:11:58.077 "name": "BaseBdev3",
00:11:58.077 "aliases": [
00:11:58.077 "6407c2ee-b954-48c4-a287-bb2a32ada3ca"
00:11:58.077 ],
00:11:58.077 "product_name": "Malloc disk",
00:11:58.077 "block_size": 512,
00:11:58.077 "num_blocks": 65536,
00:11:58.077 "uuid": "6407c2ee-b954-48c4-a287-bb2a32ada3ca",
00:11:58.077 "assigned_rate_limits": {
00:11:58.077 "rw_ios_per_sec": 0,
00:11:58.077 "rw_mbytes_per_sec": 0,
00:11:58.077 "r_mbytes_per_sec": 0,
00:11:58.077 "w_mbytes_per_sec": 0
00:11:58.077 },
00:11:58.077 "claimed": true,
00:11:58.077 "claim_type": "exclusive_write",
00:11:58.077 "zoned": false,
00:11:58.077 "supported_io_types": {
00:11:58.077 "read": true,
00:11:58.077 "write": true,
00:11:58.077 "unmap": true,
00:11:58.077 "flush": true,
00:11:58.077 "reset": true,
00:11:58.077 "nvme_admin": false,
00:11:58.077 "nvme_io": false,
00:11:58.077 "nvme_io_md": false,
00:11:58.077 "write_zeroes": true,
00:11:58.077 "zcopy": true,
00:11:58.077 "get_zone_info": false,
00:11:58.077 "zone_management": false,
00:11:58.077 "zone_append": false,
00:11:58.077 "compare": false,
00:11:58.077 "compare_and_write": false,
00:11:58.077 "abort": true,
00:11:58.077 "seek_hole": false,
00:11:58.077 "seek_data": false,
00:11:58.077 "copy": true,
00:11:58.077 "nvme_iov_md": false
00:11:58.077 },
00:11:58.077 "memory_domains": [
00:11:58.077 {
00:11:58.077 "dma_device_id": "system",
00:11:58.077 "dma_device_type": 1
00:11:58.077 },
00:11:58.077 {
00:11:58.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:58.077 "dma_device_type": 2
00:11:58.077 }
00:11:58.077 ],
00:11:58.077 "driver_specific": {}
00:11:58.077 }
00:11:58.077 ]
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:58.077 "name": "Existed_Raid",
00:11:58.077 "uuid": "2d61ffd4-6ab7-435b-a75f-53d3692a0a15",
00:11:58.077 "strip_size_kb": 64,
00:11:58.077 "state": "online",
00:11:58.077 "raid_level": "raid0",
00:11:58.077 "superblock": false,
00:11:58.077 "num_base_bdevs": 3,
00:11:58.077 "num_base_bdevs_discovered": 3,
00:11:58.077 "num_base_bdevs_operational": 3,
00:11:58.077 "base_bdevs_list": [
00:11:58.077 {
00:11:58.077 "name": "BaseBdev1",
00:11:58.077 "uuid": "bc8ab479-8a0f-4081-be00-909c03324837",
00:11:58.077 "is_configured": true,
00:11:58.077 "data_offset": 0,
00:11:58.077 "data_size": 65536
00:11:58.077 },
00:11:58.077 {
00:11:58.077 "name": "BaseBdev2",
00:11:58.077 "uuid": "625d4594-8d7d-4a94-a335-1eb50175bcbf",
00:11:58.077 "is_configured": true,
00:11:58.077 "data_offset": 0,
00:11:58.077 "data_size": 65536
00:11:58.077 },
00:11:58.077 {
00:11:58.077 "name": "BaseBdev3",
00:11:58.077 "uuid": "6407c2ee-b954-48c4-a287-bb2a32ada3ca",
00:11:58.077 "is_configured": true,
00:11:58.077 "data_offset": 0,
00:11:58.077 "data_size": 65536
00:11:58.077 }
00:11:58.077 ]
00:11:58.077 }'
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:58.077 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.645 [2024-11-20 07:08:55.730213] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:58.645 "name": "Existed_Raid",
00:11:58.645 "aliases": [
00:11:58.645 "2d61ffd4-6ab7-435b-a75f-53d3692a0a15"
00:11:58.645 ],
00:11:58.645 "product_name": "Raid Volume",
00:11:58.645 "block_size": 512,
00:11:58.645 "num_blocks": 196608,
00:11:58.645 "uuid": "2d61ffd4-6ab7-435b-a75f-53d3692a0a15",
00:11:58.645 "assigned_rate_limits": {
00:11:58.645 "rw_ios_per_sec": 0,
00:11:58.645 "rw_mbytes_per_sec": 0,
00:11:58.645 "r_mbytes_per_sec": 0,
00:11:58.645 "w_mbytes_per_sec": 0
00:11:58.645 },
00:11:58.645 "claimed": false,
00:11:58.645 "zoned": false,
00:11:58.645 "supported_io_types": {
00:11:58.645 "read": true,
00:11:58.645 "write": true,
00:11:58.645 "unmap": true,
00:11:58.645 "flush": true,
00:11:58.645 "reset": true,
00:11:58.645 "nvme_admin": false,
00:11:58.645 "nvme_io": false,
00:11:58.645 "nvme_io_md": false,
00:11:58.645 "write_zeroes": true,
00:11:58.645 "zcopy": false,
00:11:58.645 "get_zone_info": false,
00:11:58.645 "zone_management": false,
00:11:58.645 "zone_append": false,
00:11:58.645 "compare": false,
00:11:58.645 "compare_and_write": false,
00:11:58.645 "abort": false,
00:11:58.645 "seek_hole": false,
00:11:58.645 "seek_data": false,
00:11:58.645 "copy": false,
00:11:58.645 "nvme_iov_md": false
00:11:58.645 },
00:11:58.645 "memory_domains": [
00:11:58.645 {
00:11:58.645 "dma_device_id": "system",
00:11:58.645 "dma_device_type": 1
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:58.645 "dma_device_type": 2
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "dma_device_id": "system",
00:11:58.645 "dma_device_type": 1
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:58.645 "dma_device_type": 2
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "dma_device_id": "system",
00:11:58.645 "dma_device_type": 1
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:58.645 "dma_device_type": 2
00:11:58.645 }
00:11:58.645 ],
00:11:58.645 "driver_specific": {
00:11:58.645 "raid": {
00:11:58.645 "uuid": "2d61ffd4-6ab7-435b-a75f-53d3692a0a15",
00:11:58.645 "strip_size_kb": 64,
00:11:58.645 "state": "online",
00:11:58.645 "raid_level": "raid0",
00:11:58.645 "superblock": false,
00:11:58.645 "num_base_bdevs": 3,
00:11:58.645 "num_base_bdevs_discovered": 3,
00:11:58.645 "num_base_bdevs_operational": 3,
00:11:58.645 "base_bdevs_list": [
00:11:58.645 {
00:11:58.645 "name": "BaseBdev1",
00:11:58.645 "uuid": "bc8ab479-8a0f-4081-be00-909c03324837",
00:11:58.645 "is_configured": true,
00:11:58.645 "data_offset": 0,
00:11:58.645 "data_size": 65536
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "name": "BaseBdev2",
00:11:58.645 "uuid": "625d4594-8d7d-4a94-a335-1eb50175bcbf",
00:11:58.645 "is_configured": true,
00:11:58.645 "data_offset": 0,
00:11:58.645 "data_size": 65536
00:11:58.645 },
00:11:58.645 {
00:11:58.645 "name": "BaseBdev3",
00:11:58.645 "uuid": "6407c2ee-b954-48c4-a287-bb2a32ada3ca",
00:11:58.645 "is_configured": true,
00:11:58.645 "data_offset": 0,
00:11:58.645 "data_size": 65536
00:11:58.645 }
00:11:58.645 ]
00:11:58.645 }
00:11:58.645 }
00:11:58.645 }'
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:58.645 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:11:58.645 BaseBdev2
00:11:58.646 BaseBdev3'
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:58.646 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.904 07:08:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.904 [2024-11-20 07:08:56.054038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:11:58.904 [2024-11-20 07:08:56.054080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:58.904 [2024-11-20 07:08:56.054154] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:58.904 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:58.905 "name": "Existed_Raid",
00:11:58.905 "uuid": "2d61ffd4-6ab7-435b-a75f-53d3692a0a15",
00:11:58.905 "strip_size_kb": 64,
00:11:58.905 "state": "offline",
00:11:58.905 "raid_level": "raid0",
00:11:58.905 "superblock": false,
00:11:58.905 "num_base_bdevs": 3,
00:11:58.905 "num_base_bdevs_discovered": 2,
00:11:58.905 "num_base_bdevs_operational": 2,
00:11:58.905 "base_bdevs_list": [
00:11:58.905 {
00:11:58.905 "name": null,
00:11:58.905 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:58.905 "is_configured": false,
00:11:58.905 "data_offset": 0,
00:11:58.905 "data_size": 65536
00:11:58.905 },
00:11:58.905 {
00:11:58.905 "name": "BaseBdev2",
00:11:58.905 "uuid": "625d4594-8d7d-4a94-a335-1eb50175bcbf",
00:11:58.905 "is_configured": true,
00:11:58.905 "data_offset": 0,
00:11:58.905 "data_size": 65536
00:11:58.905 },
00:11:58.905 {
00:11:58.905 "name": "BaseBdev3",
00:11:58.905 "uuid": "6407c2ee-b954-48c4-a287-bb2a32ada3ca",
00:11:58.905 "is_configured": true,
00:11:58.905 "data_offset": 0,
00:11:58.905 "data_size": 65536
00:11:58.905 }
00:11:58.905 ]
00:11:58.905 }'
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:58.905 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.473 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:59.473 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:59.473 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.473 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.473 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.474 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.474 [2024-11-20 07:08:56.719262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:59.732 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.732 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:59.732 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:59.732 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.732 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:59.732 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.733 [2024-11-20 07:08:56.895790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:59.733 [2024-11-20 07:08:56.895857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.733 07:08:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.733 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.991 BaseBdev2
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.991 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.991 [
00:11:59.991 {
00:11:59.991 "name": "BaseBdev2",
00:11:59.991 "aliases": [
00:11:59.991 "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8"
00:11:59.991 ],
00:11:59.991 "product_name": "Malloc disk",
00:11:59.991 "block_size": 512,
00:11:59.991 "num_blocks": 65536,
00:11:59.991 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8",
00:11:59.991 "assigned_rate_limits": {
00:11:59.991 "rw_ios_per_sec": 0,
00:11:59.991 "rw_mbytes_per_sec": 0,
00:11:59.991 "r_mbytes_per_sec": 0,
00:11:59.991 "w_mbytes_per_sec": 0
00:11:59.991 },
00:11:59.991 "claimed": false,
00:11:59.991 "zoned": false,
00:11:59.991 "supported_io_types": {
00:11:59.991 "read": true,
00:11:59.991 "write": true,
00:11:59.991 "unmap": true,
00:11:59.991 "flush": true,
00:11:59.991 "reset": true,
00:11:59.991 "nvme_admin": false,
00:11:59.991 "nvme_io": false,
00:11:59.991 "nvme_io_md": false,
00:11:59.991 "write_zeroes": true,
00:11:59.991 "zcopy": true,
00:11:59.991 "get_zone_info": false,
00:11:59.991 "zone_management": false,
00:11:59.991 "zone_append": false,
00:11:59.992 "compare": false,
00:11:59.992 "compare_and_write": false,
00:11:59.992 "abort": true,
00:11:59.992 "seek_hole": false,
00:11:59.992 "seek_data": false,
00:11:59.992 "copy": true,
00:11:59.992 "nvme_iov_md": false
00:11:59.992 },
00:11:59.992 "memory_domains": [
00:11:59.992 {
00:11:59.992 "dma_device_id": "system",
00:11:59.992 "dma_device_type": 1
00:11:59.992 },
00:11:59.992 {
00:11:59.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:59.992 "dma_device_type": 2
00:11:59.992 }
00:11:59.992 ],
00:11:59.992 "driver_specific": {}
00:11:59.992 }
00:11:59.992 ]
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:59.992 BaseBdev3
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563
00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 [ 00:11:59.992 { 00:11:59.992 "name": "BaseBdev3", 00:11:59.992 "aliases": [ 00:11:59.992 "0d809b29-b1ee-462d-9575-91da3340fb96" 00:11:59.992 ], 00:11:59.992 "product_name": "Malloc disk", 00:11:59.992 "block_size": 512, 00:11:59.992 "num_blocks": 65536, 00:11:59.992 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:11:59.992 "assigned_rate_limits": { 00:11:59.992 "rw_ios_per_sec": 0, 00:11:59.992 "rw_mbytes_per_sec": 0, 00:11:59.992 "r_mbytes_per_sec": 0, 00:11:59.992 "w_mbytes_per_sec": 0 00:11:59.992 }, 00:11:59.992 "claimed": false, 00:11:59.992 "zoned": false, 00:11:59.992 "supported_io_types": { 00:11:59.992 "read": true, 00:11:59.992 "write": true, 00:11:59.992 "unmap": true, 00:11:59.992 "flush": true, 00:11:59.992 "reset": true, 00:11:59.992 "nvme_admin": false, 00:11:59.992 "nvme_io": false, 00:11:59.992 "nvme_io_md": false, 00:11:59.992 "write_zeroes": true, 00:11:59.992 "zcopy": true, 00:11:59.992 "get_zone_info": false, 00:11:59.992 "zone_management": false, 00:11:59.992 "zone_append": false, 00:11:59.992 "compare": false, 00:11:59.992 "compare_and_write": false, 00:11:59.992 "abort": true, 00:11:59.992 "seek_hole": false, 00:11:59.992 "seek_data": false, 00:11:59.992 "copy": true, 00:11:59.992 "nvme_iov_md": false 00:11:59.992 }, 00:11:59.992 "memory_domains": [ 00:11:59.992 { 00:11:59.992 "dma_device_id": "system", 00:11:59.992 "dma_device_type": 1 00:11:59.992 }, 00:11:59.992 { 
00:11:59.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.992 "dma_device_type": 2 00:11:59.992 } 00:11:59.992 ], 00:11:59.992 "driver_specific": {} 00:11:59.992 } 00:11:59.992 ] 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 [2024-11-20 07:08:57.179013] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.992 [2024-11-20 07:08:57.179073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.992 [2024-11-20 07:08:57.179104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.992 [2024-11-20 07:08:57.181650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.992 "name": "Existed_Raid", 00:11:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.992 "strip_size_kb": 64, 00:11:59.992 "state": "configuring", 00:11:59.992 "raid_level": "raid0", 00:11:59.992 "superblock": false, 00:11:59.992 "num_base_bdevs": 3, 00:11:59.992 "num_base_bdevs_discovered": 2, 00:11:59.992 "num_base_bdevs_operational": 3, 00:11:59.992 "base_bdevs_list": [ 00:11:59.992 { 00:11:59.992 "name": "BaseBdev1", 00:11:59.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.992 
"is_configured": false, 00:11:59.992 "data_offset": 0, 00:11:59.992 "data_size": 0 00:11:59.992 }, 00:11:59.992 { 00:11:59.992 "name": "BaseBdev2", 00:11:59.992 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:11:59.992 "is_configured": true, 00:11:59.992 "data_offset": 0, 00:11:59.992 "data_size": 65536 00:11:59.992 }, 00:11:59.992 { 00:11:59.992 "name": "BaseBdev3", 00:11:59.992 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:11:59.992 "is_configured": true, 00:11:59.992 "data_offset": 0, 00:11:59.992 "data_size": 65536 00:11:59.992 } 00:11:59.992 ] 00:11:59.992 }' 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.992 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 [2024-11-20 07:08:57.719173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.558 07:08:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.558 "name": "Existed_Raid", 00:12:00.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.558 "strip_size_kb": 64, 00:12:00.558 "state": "configuring", 00:12:00.558 "raid_level": "raid0", 00:12:00.558 "superblock": false, 00:12:00.558 "num_base_bdevs": 3, 00:12:00.558 "num_base_bdevs_discovered": 1, 00:12:00.558 "num_base_bdevs_operational": 3, 00:12:00.558 "base_bdevs_list": [ 00:12:00.558 { 00:12:00.558 "name": "BaseBdev1", 00:12:00.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.558 "is_configured": false, 00:12:00.558 "data_offset": 0, 00:12:00.558 "data_size": 0 00:12:00.558 }, 00:12:00.558 { 00:12:00.558 "name": null, 00:12:00.558 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:00.558 "is_configured": false, 00:12:00.558 "data_offset": 0, 
00:12:00.558 "data_size": 65536 00:12:00.558 }, 00:12:00.558 { 00:12:00.558 "name": "BaseBdev3", 00:12:00.558 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:00.558 "is_configured": true, 00:12:00.558 "data_offset": 0, 00:12:00.558 "data_size": 65536 00:12:00.558 } 00:12:00.558 ] 00:12:00.558 }' 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.558 07:08:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.125 [2024-11-20 07:08:58.378123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.125 BaseBdev1 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.125 [ 00:12:01.125 { 00:12:01.125 "name": "BaseBdev1", 00:12:01.125 "aliases": [ 00:12:01.125 "1a653593-9843-4af9-ab59-fd8a070df057" 00:12:01.125 ], 00:12:01.125 "product_name": "Malloc disk", 00:12:01.125 "block_size": 512, 00:12:01.125 "num_blocks": 65536, 00:12:01.125 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:01.125 "assigned_rate_limits": { 00:12:01.125 "rw_ios_per_sec": 0, 00:12:01.125 "rw_mbytes_per_sec": 0, 00:12:01.125 "r_mbytes_per_sec": 0, 00:12:01.125 "w_mbytes_per_sec": 0 00:12:01.125 }, 00:12:01.125 "claimed": true, 00:12:01.125 "claim_type": "exclusive_write", 00:12:01.125 "zoned": false, 00:12:01.125 "supported_io_types": { 00:12:01.125 "read": true, 00:12:01.125 "write": true, 00:12:01.125 "unmap": 
true, 00:12:01.125 "flush": true, 00:12:01.125 "reset": true, 00:12:01.125 "nvme_admin": false, 00:12:01.125 "nvme_io": false, 00:12:01.125 "nvme_io_md": false, 00:12:01.125 "write_zeroes": true, 00:12:01.125 "zcopy": true, 00:12:01.125 "get_zone_info": false, 00:12:01.125 "zone_management": false, 00:12:01.125 "zone_append": false, 00:12:01.125 "compare": false, 00:12:01.125 "compare_and_write": false, 00:12:01.125 "abort": true, 00:12:01.125 "seek_hole": false, 00:12:01.125 "seek_data": false, 00:12:01.125 "copy": true, 00:12:01.125 "nvme_iov_md": false 00:12:01.125 }, 00:12:01.125 "memory_domains": [ 00:12:01.125 { 00:12:01.125 "dma_device_id": "system", 00:12:01.125 "dma_device_type": 1 00:12:01.125 }, 00:12:01.125 { 00:12:01.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.125 "dma_device_type": 2 00:12:01.125 } 00:12:01.125 ], 00:12:01.125 "driver_specific": {} 00:12:01.125 } 00:12:01.125 ] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.125 07:08:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.125 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.384 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.384 "name": "Existed_Raid", 00:12:01.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.384 "strip_size_kb": 64, 00:12:01.384 "state": "configuring", 00:12:01.384 "raid_level": "raid0", 00:12:01.384 "superblock": false, 00:12:01.384 "num_base_bdevs": 3, 00:12:01.384 "num_base_bdevs_discovered": 2, 00:12:01.384 "num_base_bdevs_operational": 3, 00:12:01.384 "base_bdevs_list": [ 00:12:01.384 { 00:12:01.384 "name": "BaseBdev1", 00:12:01.384 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:01.384 "is_configured": true, 00:12:01.384 "data_offset": 0, 00:12:01.384 "data_size": 65536 00:12:01.384 }, 00:12:01.384 { 00:12:01.384 "name": null, 00:12:01.384 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:01.384 "is_configured": false, 00:12:01.384 "data_offset": 0, 00:12:01.384 "data_size": 65536 00:12:01.384 }, 00:12:01.384 { 00:12:01.384 "name": "BaseBdev3", 00:12:01.384 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:01.384 "is_configured": true, 00:12:01.384 "data_offset": 0, 
00:12:01.384 "data_size": 65536 00:12:01.384 } 00:12:01.384 ] 00:12:01.384 }' 00:12:01.384 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.384 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.643 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:01.643 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.643 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.643 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.643 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.903 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:01.903 07:08:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:01.903 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.903 07:08:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.903 [2024-11-20 07:08:58.998377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.903 "name": "Existed_Raid", 00:12:01.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.903 "strip_size_kb": 64, 00:12:01.903 "state": "configuring", 00:12:01.903 "raid_level": "raid0", 00:12:01.903 "superblock": false, 00:12:01.903 "num_base_bdevs": 3, 00:12:01.903 "num_base_bdevs_discovered": 1, 00:12:01.903 "num_base_bdevs_operational": 3, 00:12:01.903 "base_bdevs_list": [ 00:12:01.903 { 00:12:01.903 "name": "BaseBdev1", 00:12:01.903 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:01.903 "is_configured": true, 00:12:01.903 "data_offset": 0, 00:12:01.903 "data_size": 65536 00:12:01.903 }, 00:12:01.903 { 
00:12:01.903 "name": null, 00:12:01.903 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:01.903 "is_configured": false, 00:12:01.903 "data_offset": 0, 00:12:01.903 "data_size": 65536 00:12:01.903 }, 00:12:01.903 { 00:12:01.903 "name": null, 00:12:01.903 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:01.903 "is_configured": false, 00:12:01.903 "data_offset": 0, 00:12:01.903 "data_size": 65536 00:12:01.903 } 00:12:01.903 ] 00:12:01.903 }' 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.903 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.469 [2024-11-20 07:08:59.598632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.469 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.470 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.470 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.470 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.470 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.470 "name": "Existed_Raid", 00:12:02.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.470 "strip_size_kb": 64, 00:12:02.470 "state": "configuring", 00:12:02.470 "raid_level": "raid0", 00:12:02.470 
"superblock": false, 00:12:02.470 "num_base_bdevs": 3, 00:12:02.470 "num_base_bdevs_discovered": 2, 00:12:02.470 "num_base_bdevs_operational": 3, 00:12:02.470 "base_bdevs_list": [ 00:12:02.470 { 00:12:02.470 "name": "BaseBdev1", 00:12:02.470 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:02.470 "is_configured": true, 00:12:02.470 "data_offset": 0, 00:12:02.470 "data_size": 65536 00:12:02.470 }, 00:12:02.470 { 00:12:02.470 "name": null, 00:12:02.470 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:02.470 "is_configured": false, 00:12:02.470 "data_offset": 0, 00:12:02.470 "data_size": 65536 00:12:02.470 }, 00:12:02.470 { 00:12:02.470 "name": "BaseBdev3", 00:12:02.470 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:02.470 "is_configured": true, 00:12:02.470 "data_offset": 0, 00:12:02.470 "data_size": 65536 00:12:02.470 } 00:12:02.470 ] 00:12:02.470 }' 00:12:02.470 07:08:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.470 07:08:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.042 [2024-11-20 07:09:00.166807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.042 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.042 "name": "Existed_Raid", 00:12:03.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.042 "strip_size_kb": 64, 00:12:03.042 "state": "configuring", 00:12:03.042 "raid_level": "raid0", 00:12:03.042 "superblock": false, 00:12:03.042 "num_base_bdevs": 3, 00:12:03.042 "num_base_bdevs_discovered": 1, 00:12:03.042 "num_base_bdevs_operational": 3, 00:12:03.043 "base_bdevs_list": [ 00:12:03.043 { 00:12:03.043 "name": null, 00:12:03.043 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:03.043 "is_configured": false, 00:12:03.043 "data_offset": 0, 00:12:03.043 "data_size": 65536 00:12:03.043 }, 00:12:03.043 { 00:12:03.043 "name": null, 00:12:03.043 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:03.043 "is_configured": false, 00:12:03.043 "data_offset": 0, 00:12:03.043 "data_size": 65536 00:12:03.043 }, 00:12:03.043 { 00:12:03.043 "name": "BaseBdev3", 00:12:03.043 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:03.043 "is_configured": true, 00:12:03.043 "data_offset": 0, 00:12:03.043 "data_size": 65536 00:12:03.043 } 00:12:03.043 ] 00:12:03.043 }' 00:12:03.043 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.043 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.617 [2024-11-20 07:09:00.822633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.617 "name": "Existed_Raid", 00:12:03.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.617 "strip_size_kb": 64, 00:12:03.617 "state": "configuring", 00:12:03.617 "raid_level": "raid0", 00:12:03.617 "superblock": false, 00:12:03.617 "num_base_bdevs": 3, 00:12:03.617 "num_base_bdevs_discovered": 2, 00:12:03.617 "num_base_bdevs_operational": 3, 00:12:03.617 "base_bdevs_list": [ 00:12:03.617 { 00:12:03.617 "name": null, 00:12:03.617 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:03.617 "is_configured": false, 00:12:03.617 "data_offset": 0, 00:12:03.617 "data_size": 65536 00:12:03.617 }, 00:12:03.617 { 00:12:03.617 "name": "BaseBdev2", 00:12:03.617 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:03.617 "is_configured": true, 00:12:03.617 "data_offset": 0, 00:12:03.617 "data_size": 65536 00:12:03.617 }, 00:12:03.617 { 00:12:03.617 "name": "BaseBdev3", 00:12:03.617 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:03.617 "is_configured": true, 00:12:03.617 "data_offset": 0, 00:12:03.617 "data_size": 65536 00:12:03.617 } 00:12:03.617 ] 00:12:03.617 }' 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.617 07:09:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.184 07:09:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.184 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.185 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:04.185 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.185 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1a653593-9843-4af9-ab59-fd8a070df057 00:12:04.185 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.185 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.443 [2024-11-20 07:09:01.529944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:04.443 [2024-11-20 07:09:01.530026] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:04.444 [2024-11-20 07:09:01.530046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:04.444 [2024-11-20 07:09:01.530453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:12:04.444 [2024-11-20 07:09:01.530739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:04.444 [2024-11-20 07:09:01.530760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:04.444 [2024-11-20 07:09:01.531166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.444 NewBaseBdev 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:12:04.444 [ 00:12:04.444 { 00:12:04.444 "name": "NewBaseBdev", 00:12:04.444 "aliases": [ 00:12:04.444 "1a653593-9843-4af9-ab59-fd8a070df057" 00:12:04.444 ], 00:12:04.444 "product_name": "Malloc disk", 00:12:04.444 "block_size": 512, 00:12:04.444 "num_blocks": 65536, 00:12:04.444 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:04.444 "assigned_rate_limits": { 00:12:04.444 "rw_ios_per_sec": 0, 00:12:04.444 "rw_mbytes_per_sec": 0, 00:12:04.444 "r_mbytes_per_sec": 0, 00:12:04.444 "w_mbytes_per_sec": 0 00:12:04.444 }, 00:12:04.444 "claimed": true, 00:12:04.444 "claim_type": "exclusive_write", 00:12:04.444 "zoned": false, 00:12:04.444 "supported_io_types": { 00:12:04.444 "read": true, 00:12:04.444 "write": true, 00:12:04.444 "unmap": true, 00:12:04.444 "flush": true, 00:12:04.444 "reset": true, 00:12:04.444 "nvme_admin": false, 00:12:04.444 "nvme_io": false, 00:12:04.444 "nvme_io_md": false, 00:12:04.444 "write_zeroes": true, 00:12:04.444 "zcopy": true, 00:12:04.444 "get_zone_info": false, 00:12:04.444 "zone_management": false, 00:12:04.444 "zone_append": false, 00:12:04.444 "compare": false, 00:12:04.444 "compare_and_write": false, 00:12:04.444 "abort": true, 00:12:04.444 "seek_hole": false, 00:12:04.444 "seek_data": false, 00:12:04.444 "copy": true, 00:12:04.444 "nvme_iov_md": false 00:12:04.444 }, 00:12:04.444 "memory_domains": [ 00:12:04.444 { 00:12:04.444 "dma_device_id": "system", 00:12:04.444 "dma_device_type": 1 00:12:04.444 }, 00:12:04.444 { 00:12:04.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.444 "dma_device_type": 2 00:12:04.444 } 00:12:04.444 ], 00:12:04.444 "driver_specific": {} 00:12:04.444 } 00:12:04.444 ] 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.444 "name": "Existed_Raid", 00:12:04.444 "uuid": "264a3415-13f2-4548-a934-f337e90734d6", 00:12:04.444 "strip_size_kb": 64, 00:12:04.444 "state": "online", 00:12:04.444 "raid_level": "raid0", 00:12:04.444 "superblock": false, 00:12:04.444 "num_base_bdevs": 3, 00:12:04.444 
"num_base_bdevs_discovered": 3, 00:12:04.444 "num_base_bdevs_operational": 3, 00:12:04.444 "base_bdevs_list": [ 00:12:04.444 { 00:12:04.444 "name": "NewBaseBdev", 00:12:04.444 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:04.444 "is_configured": true, 00:12:04.444 "data_offset": 0, 00:12:04.444 "data_size": 65536 00:12:04.444 }, 00:12:04.444 { 00:12:04.444 "name": "BaseBdev2", 00:12:04.444 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:04.444 "is_configured": true, 00:12:04.444 "data_offset": 0, 00:12:04.444 "data_size": 65536 00:12:04.444 }, 00:12:04.444 { 00:12:04.444 "name": "BaseBdev3", 00:12:04.444 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:04.444 "is_configured": true, 00:12:04.444 "data_offset": 0, 00:12:04.444 "data_size": 65536 00:12:04.444 } 00:12:04.444 ] 00:12:04.444 }' 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.444 07:09:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:05.012 [2024-11-20 07:09:02.106533] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.012 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:05.012 "name": "Existed_Raid", 00:12:05.012 "aliases": [ 00:12:05.012 "264a3415-13f2-4548-a934-f337e90734d6" 00:12:05.012 ], 00:12:05.012 "product_name": "Raid Volume", 00:12:05.012 "block_size": 512, 00:12:05.012 "num_blocks": 196608, 00:12:05.012 "uuid": "264a3415-13f2-4548-a934-f337e90734d6", 00:12:05.012 "assigned_rate_limits": { 00:12:05.012 "rw_ios_per_sec": 0, 00:12:05.012 "rw_mbytes_per_sec": 0, 00:12:05.012 "r_mbytes_per_sec": 0, 00:12:05.012 "w_mbytes_per_sec": 0 00:12:05.012 }, 00:12:05.012 "claimed": false, 00:12:05.012 "zoned": false, 00:12:05.012 "supported_io_types": { 00:12:05.012 "read": true, 00:12:05.012 "write": true, 00:12:05.012 "unmap": true, 00:12:05.012 "flush": true, 00:12:05.012 "reset": true, 00:12:05.012 "nvme_admin": false, 00:12:05.012 "nvme_io": false, 00:12:05.012 "nvme_io_md": false, 00:12:05.012 "write_zeroes": true, 00:12:05.012 "zcopy": false, 00:12:05.012 "get_zone_info": false, 00:12:05.012 "zone_management": false, 00:12:05.012 "zone_append": false, 00:12:05.012 "compare": false, 00:12:05.012 "compare_and_write": false, 00:12:05.012 "abort": false, 00:12:05.012 "seek_hole": false, 00:12:05.012 "seek_data": false, 00:12:05.012 "copy": false, 00:12:05.012 "nvme_iov_md": false 00:12:05.012 }, 00:12:05.012 "memory_domains": [ 00:12:05.012 { 00:12:05.012 "dma_device_id": "system", 00:12:05.012 "dma_device_type": 1 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.012 "dma_device_type": 2 00:12:05.012 }, 00:12:05.012 
{ 00:12:05.012 "dma_device_id": "system", 00:12:05.012 "dma_device_type": 1 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.012 "dma_device_type": 2 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "system", 00:12:05.012 "dma_device_type": 1 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.012 "dma_device_type": 2 00:12:05.012 } 00:12:05.012 ], 00:12:05.012 "driver_specific": { 00:12:05.012 "raid": { 00:12:05.012 "uuid": "264a3415-13f2-4548-a934-f337e90734d6", 00:12:05.012 "strip_size_kb": 64, 00:12:05.012 "state": "online", 00:12:05.012 "raid_level": "raid0", 00:12:05.012 "superblock": false, 00:12:05.012 "num_base_bdevs": 3, 00:12:05.012 "num_base_bdevs_discovered": 3, 00:12:05.012 "num_base_bdevs_operational": 3, 00:12:05.012 "base_bdevs_list": [ 00:12:05.012 { 00:12:05.012 "name": "NewBaseBdev", 00:12:05.012 "uuid": "1a653593-9843-4af9-ab59-fd8a070df057", 00:12:05.012 "is_configured": true, 00:12:05.012 "data_offset": 0, 00:12:05.012 "data_size": 65536 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "name": "BaseBdev2", 00:12:05.012 "uuid": "ff5d4c5e-a4c5-4dc2-ae7e-8e46e2a69ec8", 00:12:05.012 "is_configured": true, 00:12:05.012 "data_offset": 0, 00:12:05.012 "data_size": 65536 00:12:05.012 }, 00:12:05.012 { 00:12:05.012 "name": "BaseBdev3", 00:12:05.012 "uuid": "0d809b29-b1ee-462d-9575-91da3340fb96", 00:12:05.012 "is_configured": true, 00:12:05.012 "data_offset": 0, 00:12:05.012 "data_size": 65536 00:12:05.012 } 00:12:05.012 ] 00:12:05.012 } 00:12:05.012 } 00:12:05.012 }' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:05.013 BaseBdev2 00:12:05.013 BaseBdev3' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.013 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.271 
07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.271 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.272 [2024-11-20 07:09:02.438297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:05.272 [2024-11-20 07:09:02.438342] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.272 [2024-11-20 07:09:02.438460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.272 [2024-11-20 07:09:02.438547] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.272 [2024-11-20 07:09:02.438568] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63728 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63728 ']' 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63728 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63728 00:12:05.272 killing process with pid 63728 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63728' 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63728 00:12:05.272 [2024-11-20 07:09:02.477719] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:05.272 07:09:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63728 00:12:05.530 [2024-11-20 07:09:02.757083] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.906 ************************************ 00:12:06.906 END TEST raid_state_function_test 00:12:06.906 ************************************ 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:06.906 00:12:06.906 real 0m12.291s 00:12:06.906 user 0m20.559s 
00:12:06.906 sys 0m1.624s 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.906 07:09:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:12:06.906 07:09:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:06.906 07:09:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.906 07:09:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.906 ************************************ 00:12:06.906 START TEST raid_state_function_test_sb 00:12:06.906 ************************************ 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev2 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64371 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:06.906 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64371' 00:12:06.906 Process raid pid: 64371 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64371 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64371 ']' 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.907 07:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.907 [2024-11-20 07:09:03.971751] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:12:06.907 [2024-11-20 07:09:03.972232] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.907 [2024-11-20 07:09:04.165265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.165 [2024-11-20 07:09:04.326496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.426 [2024-11-20 07:09:04.537996] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.426 [2024-11-20 07:09:04.538067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:07.721 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.721 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:07.721 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:07.721 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.721 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.721 [2024-11-20 07:09:04.965870] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.721 [2024-11-20 07:09:04.966001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.721 [2024-11-20 07:09:04.966028] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:07.722 [2024-11-20 07:09:04.966058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:07.722 [2024-11-20 07:09:04.966078] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:12:07.722 [2024-11-20 07:09:04.966109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.722 07:09:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.722 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.722 "name": "Existed_Raid", 00:12:07.722 "uuid": "78706421-5c22-4918-948f-05d99b9e0223", 00:12:07.722 "strip_size_kb": 64, 00:12:07.722 "state": "configuring", 00:12:07.722 "raid_level": "raid0", 00:12:07.722 "superblock": true, 00:12:07.722 "num_base_bdevs": 3, 00:12:07.722 "num_base_bdevs_discovered": 0, 00:12:07.722 "num_base_bdevs_operational": 3, 00:12:07.722 "base_bdevs_list": [ 00:12:07.722 { 00:12:07.722 "name": "BaseBdev1", 00:12:07.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.722 "is_configured": false, 00:12:07.722 "data_offset": 0, 00:12:07.722 "data_size": 0 00:12:07.722 }, 00:12:07.722 { 00:12:07.722 "name": "BaseBdev2", 00:12:07.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.722 "is_configured": false, 00:12:07.722 "data_offset": 0, 00:12:07.722 "data_size": 0 00:12:07.722 }, 00:12:07.722 { 00:12:07.722 "name": "BaseBdev3", 00:12:07.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.722 "is_configured": false, 00:12:07.722 "data_offset": 0, 00:12:07.722 "data_size": 0 00:12:07.722 } 00:12:07.722 ] 00:12:07.722 }' 00:12:07.722 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.722 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 [2024-11-20 07:09:05.489853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.290 [2024-11-20 07:09:05.489910] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 [2024-11-20 07:09:05.497845] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.290 [2024-11-20 07:09:05.497920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.290 [2024-11-20 07:09:05.497937] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.290 [2024-11-20 07:09:05.497954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.290 [2024-11-20 07:09:05.497964] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.290 [2024-11-20 07:09:05.497978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 [2024-11-20 07:09:05.542787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.290 BaseBdev1 
00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 [ 00:12:08.290 { 00:12:08.290 "name": "BaseBdev1", 00:12:08.290 "aliases": [ 00:12:08.290 "663ae321-007e-4977-abeb-003ecd2a6762" 00:12:08.290 ], 00:12:08.290 "product_name": "Malloc disk", 00:12:08.290 "block_size": 512, 00:12:08.290 "num_blocks": 65536, 00:12:08.290 "uuid": "663ae321-007e-4977-abeb-003ecd2a6762", 00:12:08.290 "assigned_rate_limits": { 00:12:08.290 
"rw_ios_per_sec": 0, 00:12:08.290 "rw_mbytes_per_sec": 0, 00:12:08.290 "r_mbytes_per_sec": 0, 00:12:08.290 "w_mbytes_per_sec": 0 00:12:08.290 }, 00:12:08.290 "claimed": true, 00:12:08.290 "claim_type": "exclusive_write", 00:12:08.290 "zoned": false, 00:12:08.290 "supported_io_types": { 00:12:08.290 "read": true, 00:12:08.290 "write": true, 00:12:08.290 "unmap": true, 00:12:08.290 "flush": true, 00:12:08.290 "reset": true, 00:12:08.290 "nvme_admin": false, 00:12:08.290 "nvme_io": false, 00:12:08.290 "nvme_io_md": false, 00:12:08.290 "write_zeroes": true, 00:12:08.290 "zcopy": true, 00:12:08.290 "get_zone_info": false, 00:12:08.290 "zone_management": false, 00:12:08.290 "zone_append": false, 00:12:08.290 "compare": false, 00:12:08.290 "compare_and_write": false, 00:12:08.290 "abort": true, 00:12:08.290 "seek_hole": false, 00:12:08.290 "seek_data": false, 00:12:08.290 "copy": true, 00:12:08.290 "nvme_iov_md": false 00:12:08.290 }, 00:12:08.290 "memory_domains": [ 00:12:08.290 { 00:12:08.290 "dma_device_id": "system", 00:12:08.290 "dma_device_type": 1 00:12:08.290 }, 00:12:08.290 { 00:12:08.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.290 "dma_device_type": 2 00:12:08.290 } 00:12:08.290 ], 00:12:08.290 "driver_specific": {} 00:12:08.290 } 00:12:08.290 ] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.290 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.549 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.549 "name": "Existed_Raid", 00:12:08.549 "uuid": "dc9fd2d3-666b-41b1-aad5-6f79971d245e", 00:12:08.549 "strip_size_kb": 64, 00:12:08.549 "state": "configuring", 00:12:08.549 "raid_level": "raid0", 00:12:08.549 "superblock": true, 00:12:08.549 "num_base_bdevs": 3, 00:12:08.549 "num_base_bdevs_discovered": 1, 00:12:08.549 "num_base_bdevs_operational": 3, 00:12:08.549 "base_bdevs_list": [ 00:12:08.549 { 00:12:08.549 "name": "BaseBdev1", 00:12:08.549 "uuid": "663ae321-007e-4977-abeb-003ecd2a6762", 00:12:08.549 "is_configured": true, 00:12:08.549 "data_offset": 2048, 00:12:08.549 "data_size": 63488 
00:12:08.549 }, 00:12:08.549 { 00:12:08.549 "name": "BaseBdev2", 00:12:08.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.549 "is_configured": false, 00:12:08.549 "data_offset": 0, 00:12:08.549 "data_size": 0 00:12:08.549 }, 00:12:08.549 { 00:12:08.549 "name": "BaseBdev3", 00:12:08.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.549 "is_configured": false, 00:12:08.549 "data_offset": 0, 00:12:08.549 "data_size": 0 00:12:08.549 } 00:12:08.549 ] 00:12:08.549 }' 00:12:08.549 07:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.549 07:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.808 [2024-11-20 07:09:06.047022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:08.808 [2024-11-20 07:09:06.047084] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.808 [2024-11-20 07:09:06.055064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:08.808 [2024-11-20 
07:09:06.057465] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.808 [2024-11-20 07:09:06.057522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.808 [2024-11-20 07:09:06.057539] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.808 [2024-11-20 07:09:06.057555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.808 "name": "Existed_Raid", 00:12:08.808 "uuid": "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce", 00:12:08.808 "strip_size_kb": 64, 00:12:08.808 "state": "configuring", 00:12:08.808 "raid_level": "raid0", 00:12:08.808 "superblock": true, 00:12:08.808 "num_base_bdevs": 3, 00:12:08.808 "num_base_bdevs_discovered": 1, 00:12:08.808 "num_base_bdevs_operational": 3, 00:12:08.808 "base_bdevs_list": [ 00:12:08.808 { 00:12:08.808 "name": "BaseBdev1", 00:12:08.808 "uuid": "663ae321-007e-4977-abeb-003ecd2a6762", 00:12:08.808 "is_configured": true, 00:12:08.808 "data_offset": 2048, 00:12:08.808 "data_size": 63488 00:12:08.808 }, 00:12:08.808 { 00:12:08.808 "name": "BaseBdev2", 00:12:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.808 "is_configured": false, 00:12:08.808 "data_offset": 0, 00:12:08.808 "data_size": 0 00:12:08.808 }, 00:12:08.808 { 00:12:08.808 "name": "BaseBdev3", 00:12:08.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.808 "is_configured": false, 00:12:08.808 "data_offset": 0, 00:12:08.808 "data_size": 0 00:12:08.808 } 00:12:08.808 ] 00:12:08.808 }' 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.808 07:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 [2024-11-20 07:09:06.641964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.376 BaseBdev2 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.376 [ 00:12:09.376 { 00:12:09.376 "name": "BaseBdev2", 00:12:09.376 "aliases": [ 00:12:09.376 "116f96e7-cddb-49f1-8735-36b48f5c493c" 00:12:09.376 ], 00:12:09.376 "product_name": "Malloc disk", 00:12:09.376 "block_size": 512, 00:12:09.376 "num_blocks": 65536, 00:12:09.376 "uuid": "116f96e7-cddb-49f1-8735-36b48f5c493c", 00:12:09.376 "assigned_rate_limits": { 00:12:09.376 "rw_ios_per_sec": 0, 00:12:09.376 "rw_mbytes_per_sec": 0, 00:12:09.376 "r_mbytes_per_sec": 0, 00:12:09.376 "w_mbytes_per_sec": 0 00:12:09.376 }, 00:12:09.376 "claimed": true, 00:12:09.376 "claim_type": "exclusive_write", 00:12:09.376 "zoned": false, 00:12:09.376 "supported_io_types": { 00:12:09.376 "read": true, 00:12:09.376 "write": true, 00:12:09.376 "unmap": true, 00:12:09.376 "flush": true, 00:12:09.376 "reset": true, 00:12:09.376 "nvme_admin": false, 00:12:09.376 "nvme_io": false, 00:12:09.376 "nvme_io_md": false, 00:12:09.376 "write_zeroes": true, 00:12:09.376 "zcopy": true, 00:12:09.376 "get_zone_info": false, 00:12:09.376 "zone_management": false, 00:12:09.376 "zone_append": false, 00:12:09.376 "compare": false, 00:12:09.376 "compare_and_write": false, 00:12:09.376 "abort": true, 00:12:09.376 "seek_hole": false, 00:12:09.376 "seek_data": false, 00:12:09.376 "copy": true, 00:12:09.376 "nvme_iov_md": false 00:12:09.376 }, 00:12:09.376 "memory_domains": [ 00:12:09.376 { 00:12:09.376 "dma_device_id": "system", 00:12:09.376 "dma_device_type": 1 00:12:09.376 }, 00:12:09.376 { 00:12:09.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.376 "dma_device_type": 2 00:12:09.376 } 00:12:09.376 ], 00:12:09.376 "driver_specific": {} 00:12:09.376 } 00:12:09.376 ] 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.376 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.635 07:09:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.635 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.635 "name": "Existed_Raid", 00:12:09.635 "uuid": "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce", 00:12:09.635 "strip_size_kb": 64, 00:12:09.635 "state": "configuring", 00:12:09.635 "raid_level": "raid0", 00:12:09.635 "superblock": true, 00:12:09.635 "num_base_bdevs": 3, 00:12:09.635 "num_base_bdevs_discovered": 2, 00:12:09.635 "num_base_bdevs_operational": 3, 00:12:09.635 "base_bdevs_list": [ 00:12:09.635 { 00:12:09.635 "name": "BaseBdev1", 00:12:09.635 "uuid": "663ae321-007e-4977-abeb-003ecd2a6762", 00:12:09.635 "is_configured": true, 00:12:09.635 "data_offset": 2048, 00:12:09.635 "data_size": 63488 00:12:09.635 }, 00:12:09.635 { 00:12:09.635 "name": "BaseBdev2", 00:12:09.635 "uuid": "116f96e7-cddb-49f1-8735-36b48f5c493c", 00:12:09.635 "is_configured": true, 00:12:09.635 "data_offset": 2048, 00:12:09.635 "data_size": 63488 00:12:09.635 }, 00:12:09.635 { 00:12:09.635 "name": "BaseBdev3", 00:12:09.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.635 "is_configured": false, 00:12:09.635 "data_offset": 0, 00:12:09.635 "data_size": 0 00:12:09.635 } 00:12:09.635 ] 00:12:09.635 }' 00:12:09.635 07:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.635 07:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.893 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:09.894 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.894 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.152 [2024-11-20 07:09:07.232423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:10.152 [2024-11-20 07:09:07.232745] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:10.152 [2024-11-20 07:09:07.232778] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:10.152 BaseBdev3 00:12:10.152 [2024-11-20 07:09:07.233160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:10.152 [2024-11-20 07:09:07.233349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:10.152 [2024-11-20 07:09:07.233372] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:10.152 [2024-11-20 07:09:07.233555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.152 [ 00:12:10.152 { 00:12:10.152 "name": "BaseBdev3", 00:12:10.152 "aliases": [ 00:12:10.152 "b6bdd382-c93e-43c7-9c21-9276d26aa865" 00:12:10.152 ], 00:12:10.152 "product_name": "Malloc disk", 00:12:10.152 "block_size": 512, 00:12:10.152 "num_blocks": 65536, 00:12:10.152 "uuid": "b6bdd382-c93e-43c7-9c21-9276d26aa865", 00:12:10.152 "assigned_rate_limits": { 00:12:10.152 "rw_ios_per_sec": 0, 00:12:10.152 "rw_mbytes_per_sec": 0, 00:12:10.152 "r_mbytes_per_sec": 0, 00:12:10.152 "w_mbytes_per_sec": 0 00:12:10.152 }, 00:12:10.152 "claimed": true, 00:12:10.152 "claim_type": "exclusive_write", 00:12:10.152 "zoned": false, 00:12:10.152 "supported_io_types": { 00:12:10.152 "read": true, 00:12:10.152 "write": true, 00:12:10.152 "unmap": true, 00:12:10.152 "flush": true, 00:12:10.152 "reset": true, 00:12:10.152 "nvme_admin": false, 00:12:10.152 "nvme_io": false, 00:12:10.152 "nvme_io_md": false, 00:12:10.152 "write_zeroes": true, 00:12:10.152 "zcopy": true, 00:12:10.152 "get_zone_info": false, 00:12:10.152 "zone_management": false, 00:12:10.152 "zone_append": false, 00:12:10.152 "compare": false, 00:12:10.152 "compare_and_write": false, 00:12:10.152 "abort": true, 00:12:10.152 "seek_hole": false, 00:12:10.152 "seek_data": false, 00:12:10.152 "copy": true, 00:12:10.152 "nvme_iov_md": false 00:12:10.152 }, 00:12:10.152 "memory_domains": [ 00:12:10.152 { 00:12:10.152 "dma_device_id": "system", 00:12:10.152 "dma_device_type": 1 00:12:10.152 }, 00:12:10.152 { 00:12:10.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.152 "dma_device_type": 2 00:12:10.152 } 00:12:10.152 ], 00:12:10.152 "driver_specific": 
{} 00:12:10.152 } 00:12:10.152 ] 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.152 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.152 "name": "Existed_Raid", 00:12:10.152 "uuid": "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce", 00:12:10.153 "strip_size_kb": 64, 00:12:10.153 "state": "online", 00:12:10.153 "raid_level": "raid0", 00:12:10.153 "superblock": true, 00:12:10.153 "num_base_bdevs": 3, 00:12:10.153 "num_base_bdevs_discovered": 3, 00:12:10.153 "num_base_bdevs_operational": 3, 00:12:10.153 "base_bdevs_list": [ 00:12:10.153 { 00:12:10.153 "name": "BaseBdev1", 00:12:10.153 "uuid": "663ae321-007e-4977-abeb-003ecd2a6762", 00:12:10.153 "is_configured": true, 00:12:10.153 "data_offset": 2048, 00:12:10.153 "data_size": 63488 00:12:10.153 }, 00:12:10.153 { 00:12:10.153 "name": "BaseBdev2", 00:12:10.153 "uuid": "116f96e7-cddb-49f1-8735-36b48f5c493c", 00:12:10.153 "is_configured": true, 00:12:10.153 "data_offset": 2048, 00:12:10.153 "data_size": 63488 00:12:10.153 }, 00:12:10.153 { 00:12:10.153 "name": "BaseBdev3", 00:12:10.153 "uuid": "b6bdd382-c93e-43c7-9c21-9276d26aa865", 00:12:10.153 "is_configured": true, 00:12:10.153 "data_offset": 2048, 00:12:10.153 "data_size": 63488 00:12:10.153 } 00:12:10.153 ] 00:12:10.153 }' 00:12:10.153 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.153 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.720 [2024-11-20 07:09:07.781053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:10.720 "name": "Existed_Raid", 00:12:10.720 "aliases": [ 00:12:10.720 "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce" 00:12:10.720 ], 00:12:10.720 "product_name": "Raid Volume", 00:12:10.720 "block_size": 512, 00:12:10.720 "num_blocks": 190464, 00:12:10.720 "uuid": "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce", 00:12:10.720 "assigned_rate_limits": { 00:12:10.720 "rw_ios_per_sec": 0, 00:12:10.720 "rw_mbytes_per_sec": 0, 00:12:10.720 "r_mbytes_per_sec": 0, 00:12:10.720 "w_mbytes_per_sec": 0 00:12:10.720 }, 00:12:10.720 "claimed": false, 00:12:10.720 "zoned": false, 00:12:10.720 "supported_io_types": { 00:12:10.720 "read": true, 00:12:10.720 "write": true, 00:12:10.720 "unmap": true, 00:12:10.720 "flush": true, 00:12:10.720 "reset": true, 00:12:10.720 "nvme_admin": false, 00:12:10.720 "nvme_io": false, 00:12:10.720 "nvme_io_md": false, 00:12:10.720 
"write_zeroes": true, 00:12:10.720 "zcopy": false, 00:12:10.720 "get_zone_info": false, 00:12:10.720 "zone_management": false, 00:12:10.720 "zone_append": false, 00:12:10.720 "compare": false, 00:12:10.720 "compare_and_write": false, 00:12:10.720 "abort": false, 00:12:10.720 "seek_hole": false, 00:12:10.720 "seek_data": false, 00:12:10.720 "copy": false, 00:12:10.720 "nvme_iov_md": false 00:12:10.720 }, 00:12:10.720 "memory_domains": [ 00:12:10.720 { 00:12:10.720 "dma_device_id": "system", 00:12:10.720 "dma_device_type": 1 00:12:10.720 }, 00:12:10.720 { 00:12:10.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.720 "dma_device_type": 2 00:12:10.720 }, 00:12:10.720 { 00:12:10.720 "dma_device_id": "system", 00:12:10.720 "dma_device_type": 1 00:12:10.720 }, 00:12:10.720 { 00:12:10.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.720 "dma_device_type": 2 00:12:10.720 }, 00:12:10.720 { 00:12:10.720 "dma_device_id": "system", 00:12:10.720 "dma_device_type": 1 00:12:10.720 }, 00:12:10.720 { 00:12:10.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.720 "dma_device_type": 2 00:12:10.720 } 00:12:10.720 ], 00:12:10.720 "driver_specific": { 00:12:10.720 "raid": { 00:12:10.720 "uuid": "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce", 00:12:10.720 "strip_size_kb": 64, 00:12:10.720 "state": "online", 00:12:10.720 "raid_level": "raid0", 00:12:10.720 "superblock": true, 00:12:10.720 "num_base_bdevs": 3, 00:12:10.720 "num_base_bdevs_discovered": 3, 00:12:10.720 "num_base_bdevs_operational": 3, 00:12:10.720 "base_bdevs_list": [ 00:12:10.720 { 00:12:10.720 "name": "BaseBdev1", 00:12:10.720 "uuid": "663ae321-007e-4977-abeb-003ecd2a6762", 00:12:10.720 "is_configured": true, 00:12:10.720 "data_offset": 2048, 00:12:10.720 "data_size": 63488 00:12:10.720 }, 00:12:10.720 { 00:12:10.720 "name": "BaseBdev2", 00:12:10.720 "uuid": "116f96e7-cddb-49f1-8735-36b48f5c493c", 00:12:10.720 "is_configured": true, 00:12:10.720 "data_offset": 2048, 00:12:10.720 "data_size": 63488 00:12:10.720 }, 
00:12:10.720 { 00:12:10.720 "name": "BaseBdev3", 00:12:10.720 "uuid": "b6bdd382-c93e-43c7-9c21-9276d26aa865", 00:12:10.720 "is_configured": true, 00:12:10.720 "data_offset": 2048, 00:12:10.720 "data_size": 63488 00:12:10.720 } 00:12:10.720 ] 00:12:10.720 } 00:12:10.720 } 00:12:10.720 }' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:10.720 BaseBdev2 00:12:10.720 BaseBdev3' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.720 
07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.720 07:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.720 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.979 [2024-11-20 07:09:08.064818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:10.979 [2024-11-20 07:09:08.065045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.979 [2024-11-20 07:09:08.065188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.979 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.979 "name": "Existed_Raid", 00:12:10.979 "uuid": "d1b2d0b4-36eb-4f59-a70a-9ece86fb2fce", 00:12:10.979 "strip_size_kb": 64, 00:12:10.979 "state": "offline", 00:12:10.979 "raid_level": "raid0", 00:12:10.979 "superblock": true, 00:12:10.979 "num_base_bdevs": 3, 00:12:10.979 "num_base_bdevs_discovered": 2, 00:12:10.979 "num_base_bdevs_operational": 2, 00:12:10.979 "base_bdevs_list": [ 00:12:10.979 { 00:12:10.979 "name": null, 00:12:10.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.979 "is_configured": false, 00:12:10.979 "data_offset": 0, 00:12:10.979 "data_size": 63488 00:12:10.979 }, 00:12:10.979 { 00:12:10.979 "name": "BaseBdev2", 00:12:10.979 "uuid": "116f96e7-cddb-49f1-8735-36b48f5c493c", 00:12:10.979 "is_configured": true, 00:12:10.979 "data_offset": 2048, 00:12:10.979 "data_size": 63488 00:12:10.979 }, 00:12:10.979 { 00:12:10.979 "name": "BaseBdev3", 00:12:10.979 "uuid": "b6bdd382-c93e-43c7-9c21-9276d26aa865", 
00:12:10.979 "is_configured": true, 00:12:10.980 "data_offset": 2048, 00:12:10.980 "data_size": 63488 00:12:10.980 } 00:12:10.980 ] 00:12:10.980 }' 00:12:10.980 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.980 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 [2024-11-20 07:09:08.693493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.546 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.546 [2024-11-20 07:09:08.829976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.546 [2024-11-20 07:09:08.830067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.805 07:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.805 BaseBdev2 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.805 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.805 [ 00:12:11.805 { 00:12:11.805 "name": "BaseBdev2", 00:12:11.805 "aliases": [ 00:12:11.805 "235621ff-2996-4a9e-b8cc-d076aadc3701" 00:12:11.805 ], 00:12:11.805 "product_name": "Malloc disk", 00:12:11.805 "block_size": 512, 00:12:11.805 "num_blocks": 65536, 00:12:11.805 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:11.805 "assigned_rate_limits": { 00:12:11.805 "rw_ios_per_sec": 0, 00:12:11.805 "rw_mbytes_per_sec": 0, 00:12:11.805 "r_mbytes_per_sec": 0, 00:12:11.805 "w_mbytes_per_sec": 0 00:12:11.805 }, 00:12:11.805 "claimed": false, 00:12:11.805 "zoned": false, 00:12:11.805 "supported_io_types": { 00:12:11.805 "read": true, 00:12:11.805 "write": true, 00:12:11.805 "unmap": true, 00:12:11.805 "flush": true, 00:12:11.805 "reset": true, 00:12:11.805 "nvme_admin": false, 00:12:11.805 "nvme_io": false, 00:12:11.805 "nvme_io_md": false, 00:12:11.805 "write_zeroes": true, 00:12:11.805 "zcopy": true, 00:12:11.805 "get_zone_info": false, 00:12:11.805 "zone_management": false, 00:12:11.805 
"zone_append": false, 00:12:11.805 "compare": false, 00:12:11.805 "compare_and_write": false, 00:12:11.805 "abort": true, 00:12:11.805 "seek_hole": false, 00:12:11.805 "seek_data": false, 00:12:11.805 "copy": true, 00:12:11.805 "nvme_iov_md": false 00:12:11.806 }, 00:12:11.806 "memory_domains": [ 00:12:11.806 { 00:12:11.806 "dma_device_id": "system", 00:12:11.806 "dma_device_type": 1 00:12:11.806 }, 00:12:11.806 { 00:12:11.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.806 "dma_device_type": 2 00:12:11.806 } 00:12:11.806 ], 00:12:11.806 "driver_specific": {} 00:12:11.806 } 00:12:11.806 ] 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.806 BaseBdev3 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.806 
07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.806 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.063 [ 00:12:12.063 { 00:12:12.063 "name": "BaseBdev3", 00:12:12.063 "aliases": [ 00:12:12.063 "ac340b0f-5732-4141-9927-f7e7dd4e2c80" 00:12:12.063 ], 00:12:12.063 "product_name": "Malloc disk", 00:12:12.063 "block_size": 512, 00:12:12.063 "num_blocks": 65536, 00:12:12.063 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:12.063 "assigned_rate_limits": { 00:12:12.063 "rw_ios_per_sec": 0, 00:12:12.063 "rw_mbytes_per_sec": 0, 00:12:12.063 "r_mbytes_per_sec": 0, 00:12:12.063 "w_mbytes_per_sec": 0 00:12:12.063 }, 00:12:12.063 "claimed": false, 00:12:12.063 "zoned": false, 00:12:12.063 "supported_io_types": { 00:12:12.063 "read": true, 00:12:12.063 "write": true, 00:12:12.063 "unmap": true, 00:12:12.063 "flush": true, 00:12:12.063 "reset": true, 00:12:12.063 "nvme_admin": false, 00:12:12.063 "nvme_io": false, 00:12:12.063 "nvme_io_md": false, 00:12:12.063 "write_zeroes": true, 00:12:12.063 "zcopy": true, 00:12:12.063 "get_zone_info": false, 
00:12:12.063 "zone_management": false, 00:12:12.063 "zone_append": false, 00:12:12.063 "compare": false, 00:12:12.063 "compare_and_write": false, 00:12:12.063 "abort": true, 00:12:12.063 "seek_hole": false, 00:12:12.063 "seek_data": false, 00:12:12.063 "copy": true, 00:12:12.063 "nvme_iov_md": false 00:12:12.063 }, 00:12:12.063 "memory_domains": [ 00:12:12.063 { 00:12:12.063 "dma_device_id": "system", 00:12:12.063 "dma_device_type": 1 00:12:12.063 }, 00:12:12.063 { 00:12:12.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.063 "dma_device_type": 2 00:12:12.063 } 00:12:12.063 ], 00:12:12.063 "driver_specific": {} 00:12:12.063 } 00:12:12.063 ] 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.063 [2024-11-20 07:09:09.149490] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:12.063 [2024-11-20 07:09:09.149851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:12.063 [2024-11-20 07:09:09.150023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.063 [2024-11-20 07:09:09.152837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:12.063 "name": "Existed_Raid", 00:12:12.063 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:12.063 "strip_size_kb": 64, 00:12:12.063 "state": "configuring", 00:12:12.063 "raid_level": "raid0", 00:12:12.063 "superblock": true, 00:12:12.063 "num_base_bdevs": 3, 00:12:12.063 "num_base_bdevs_discovered": 2, 00:12:12.063 "num_base_bdevs_operational": 3, 00:12:12.063 "base_bdevs_list": [ 00:12:12.063 { 00:12:12.063 "name": "BaseBdev1", 00:12:12.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.063 "is_configured": false, 00:12:12.063 "data_offset": 0, 00:12:12.063 "data_size": 0 00:12:12.063 }, 00:12:12.063 { 00:12:12.063 "name": "BaseBdev2", 00:12:12.063 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:12.063 "is_configured": true, 00:12:12.063 "data_offset": 2048, 00:12:12.063 "data_size": 63488 00:12:12.063 }, 00:12:12.063 { 00:12:12.063 "name": "BaseBdev3", 00:12:12.063 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:12.063 "is_configured": true, 00:12:12.063 "data_offset": 2048, 00:12:12.063 "data_size": 63488 00:12:12.063 } 00:12:12.063 ] 00:12:12.063 }' 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.063 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.657 [2024-11-20 07:09:09.681552] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.657 "name": "Existed_Raid", 00:12:12.657 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:12.657 "strip_size_kb": 64, 00:12:12.657 "state": "configuring", 00:12:12.657 "raid_level": "raid0", 
00:12:12.657 "superblock": true, 00:12:12.657 "num_base_bdevs": 3, 00:12:12.657 "num_base_bdevs_discovered": 1, 00:12:12.657 "num_base_bdevs_operational": 3, 00:12:12.657 "base_bdevs_list": [ 00:12:12.657 { 00:12:12.657 "name": "BaseBdev1", 00:12:12.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.657 "is_configured": false, 00:12:12.657 "data_offset": 0, 00:12:12.657 "data_size": 0 00:12:12.657 }, 00:12:12.657 { 00:12:12.657 "name": null, 00:12:12.657 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:12.657 "is_configured": false, 00:12:12.657 "data_offset": 0, 00:12:12.657 "data_size": 63488 00:12:12.657 }, 00:12:12.657 { 00:12:12.657 "name": "BaseBdev3", 00:12:12.657 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:12.657 "is_configured": true, 00:12:12.657 "data_offset": 2048, 00:12:12.657 "data_size": 63488 00:12:12.657 } 00:12:12.657 ] 00:12:12.657 }' 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.657 07:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.916 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.916 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.916 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:12.916 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.916 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.174 [2024-11-20 07:09:10.304275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:13.174 BaseBdev1 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.174 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.174 [ 00:12:13.174 { 00:12:13.174 "name": "BaseBdev1", 00:12:13.174 
"aliases": [ 00:12:13.174 "71186124-c276-4d4b-9e7f-e3c24bba5e8e" 00:12:13.174 ], 00:12:13.174 "product_name": "Malloc disk", 00:12:13.174 "block_size": 512, 00:12:13.174 "num_blocks": 65536, 00:12:13.174 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:13.174 "assigned_rate_limits": { 00:12:13.174 "rw_ios_per_sec": 0, 00:12:13.174 "rw_mbytes_per_sec": 0, 00:12:13.174 "r_mbytes_per_sec": 0, 00:12:13.174 "w_mbytes_per_sec": 0 00:12:13.174 }, 00:12:13.174 "claimed": true, 00:12:13.174 "claim_type": "exclusive_write", 00:12:13.174 "zoned": false, 00:12:13.174 "supported_io_types": { 00:12:13.175 "read": true, 00:12:13.175 "write": true, 00:12:13.175 "unmap": true, 00:12:13.175 "flush": true, 00:12:13.175 "reset": true, 00:12:13.175 "nvme_admin": false, 00:12:13.175 "nvme_io": false, 00:12:13.175 "nvme_io_md": false, 00:12:13.175 "write_zeroes": true, 00:12:13.175 "zcopy": true, 00:12:13.175 "get_zone_info": false, 00:12:13.175 "zone_management": false, 00:12:13.175 "zone_append": false, 00:12:13.175 "compare": false, 00:12:13.175 "compare_and_write": false, 00:12:13.175 "abort": true, 00:12:13.175 "seek_hole": false, 00:12:13.175 "seek_data": false, 00:12:13.175 "copy": true, 00:12:13.175 "nvme_iov_md": false 00:12:13.175 }, 00:12:13.175 "memory_domains": [ 00:12:13.175 { 00:12:13.175 "dma_device_id": "system", 00:12:13.175 "dma_device_type": 1 00:12:13.175 }, 00:12:13.175 { 00:12:13.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.175 "dma_device_type": 2 00:12:13.175 } 00:12:13.175 ], 00:12:13.175 "driver_specific": {} 00:12:13.175 } 00:12:13.175 ] 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:13.175 07:09:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.175 "name": "Existed_Raid", 00:12:13.175 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:13.175 "strip_size_kb": 64, 00:12:13.175 "state": "configuring", 00:12:13.175 "raid_level": "raid0", 00:12:13.175 "superblock": true, 00:12:13.175 "num_base_bdevs": 3, 00:12:13.175 
"num_base_bdevs_discovered": 2, 00:12:13.175 "num_base_bdevs_operational": 3, 00:12:13.175 "base_bdevs_list": [ 00:12:13.175 { 00:12:13.175 "name": "BaseBdev1", 00:12:13.175 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:13.175 "is_configured": true, 00:12:13.175 "data_offset": 2048, 00:12:13.175 "data_size": 63488 00:12:13.175 }, 00:12:13.175 { 00:12:13.175 "name": null, 00:12:13.175 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:13.175 "is_configured": false, 00:12:13.175 "data_offset": 0, 00:12:13.175 "data_size": 63488 00:12:13.175 }, 00:12:13.175 { 00:12:13.175 "name": "BaseBdev3", 00:12:13.175 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:13.175 "is_configured": true, 00:12:13.175 "data_offset": 2048, 00:12:13.175 "data_size": 63488 00:12:13.175 } 00:12:13.175 ] 00:12:13.175 }' 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.175 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.742 07:09:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.742 [2024-11-20 07:09:10.920584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.742 07:09:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.742 "name": "Existed_Raid", 00:12:13.742 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:13.742 "strip_size_kb": 64, 00:12:13.742 "state": "configuring", 00:12:13.742 "raid_level": "raid0", 00:12:13.742 "superblock": true, 00:12:13.742 "num_base_bdevs": 3, 00:12:13.742 "num_base_bdevs_discovered": 1, 00:12:13.742 "num_base_bdevs_operational": 3, 00:12:13.742 "base_bdevs_list": [ 00:12:13.742 { 00:12:13.742 "name": "BaseBdev1", 00:12:13.742 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:13.742 "is_configured": true, 00:12:13.742 "data_offset": 2048, 00:12:13.742 "data_size": 63488 00:12:13.742 }, 00:12:13.742 { 00:12:13.742 "name": null, 00:12:13.742 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:13.742 "is_configured": false, 00:12:13.742 "data_offset": 0, 00:12:13.742 "data_size": 63488 00:12:13.742 }, 00:12:13.742 { 00:12:13.742 "name": null, 00:12:13.742 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:13.742 "is_configured": false, 00:12:13.742 "data_offset": 0, 00:12:13.742 "data_size": 63488 00:12:13.742 } 00:12:13.742 ] 00:12:13.742 }' 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.742 07:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.307 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.307 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.307 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.307 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.307 07:09:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.307 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:14.307 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.308 [2024-11-20 07:09:11.484768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.308 "name": "Existed_Raid", 00:12:14.308 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:14.308 "strip_size_kb": 64, 00:12:14.308 "state": "configuring", 00:12:14.308 "raid_level": "raid0", 00:12:14.308 "superblock": true, 00:12:14.308 "num_base_bdevs": 3, 00:12:14.308 "num_base_bdevs_discovered": 2, 00:12:14.308 "num_base_bdevs_operational": 3, 00:12:14.308 "base_bdevs_list": [ 00:12:14.308 { 00:12:14.308 "name": "BaseBdev1", 00:12:14.308 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:14.308 "is_configured": true, 00:12:14.308 "data_offset": 2048, 00:12:14.308 "data_size": 63488 00:12:14.308 }, 00:12:14.308 { 00:12:14.308 "name": null, 00:12:14.308 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:14.308 "is_configured": false, 00:12:14.308 "data_offset": 0, 00:12:14.308 "data_size": 63488 00:12:14.308 }, 00:12:14.308 { 00:12:14.308 "name": "BaseBdev3", 00:12:14.308 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:14.308 "is_configured": true, 00:12:14.308 "data_offset": 2048, 00:12:14.308 "data_size": 63488 00:12:14.308 } 00:12:14.308 ] 00:12:14.308 }' 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.308 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:14.875 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.875 07:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:14.875 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.875 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.875 07:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.875 [2024-11-20 07:09:12.037041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.875 "name": "Existed_Raid", 00:12:14.875 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:14.875 "strip_size_kb": 64, 00:12:14.875 "state": "configuring", 00:12:14.875 "raid_level": "raid0", 00:12:14.875 "superblock": true, 00:12:14.875 "num_base_bdevs": 3, 00:12:14.875 "num_base_bdevs_discovered": 1, 00:12:14.875 "num_base_bdevs_operational": 3, 00:12:14.875 "base_bdevs_list": [ 00:12:14.875 { 00:12:14.875 "name": null, 00:12:14.875 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:14.875 "is_configured": false, 00:12:14.875 "data_offset": 0, 00:12:14.875 "data_size": 63488 00:12:14.875 }, 00:12:14.875 { 00:12:14.875 "name": null, 00:12:14.875 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:14.875 "is_configured": false, 00:12:14.875 "data_offset": 0, 00:12:14.875 "data_size": 63488 00:12:14.875 
}, 00:12:14.875 { 00:12:14.875 "name": "BaseBdev3", 00:12:14.875 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:14.875 "is_configured": true, 00:12:14.875 "data_offset": 2048, 00:12:14.875 "data_size": 63488 00:12:14.875 } 00:12:14.875 ] 00:12:14.875 }' 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.875 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.442 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.442 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:15.442 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.442 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 [2024-11-20 07:09:12.700154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.443 "name": "Existed_Raid", 00:12:15.443 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:15.443 "strip_size_kb": 64, 00:12:15.443 "state": "configuring", 00:12:15.443 "raid_level": "raid0", 00:12:15.443 "superblock": true, 00:12:15.443 "num_base_bdevs": 3, 00:12:15.443 "num_base_bdevs_discovered": 2, 00:12:15.443 
"num_base_bdevs_operational": 3, 00:12:15.443 "base_bdevs_list": [ 00:12:15.443 { 00:12:15.443 "name": null, 00:12:15.443 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:15.443 "is_configured": false, 00:12:15.443 "data_offset": 0, 00:12:15.443 "data_size": 63488 00:12:15.443 }, 00:12:15.443 { 00:12:15.443 "name": "BaseBdev2", 00:12:15.443 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:15.443 "is_configured": true, 00:12:15.443 "data_offset": 2048, 00:12:15.443 "data_size": 63488 00:12:15.443 }, 00:12:15.443 { 00:12:15.443 "name": "BaseBdev3", 00:12:15.443 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:15.443 "is_configured": true, 00:12:15.443 "data_offset": 2048, 00:12:15.443 "data_size": 63488 00:12:15.443 } 00:12:15.443 ] 00:12:15.443 }' 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.443 07:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.009 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.010 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.268 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 71186124-c276-4d4b-9e7f-e3c24bba5e8e 00:12:16.268 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.268 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.269 [2024-11-20 07:09:13.377828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:16.269 NewBaseBdev 00:12:16.269 [2024-11-20 07:09:13.378413] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:16.269 [2024-11-20 07:09:13.378447] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:16.269 [2024-11-20 07:09:13.378783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.269 [2024-11-20 07:09:13.379006] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:16.269 [2024-11-20 07:09:13.379025] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:16.269 [2024-11-20 07:09:13.379203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:16.269 07:09:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.269 [ 00:12:16.269 { 00:12:16.269 "name": "NewBaseBdev", 00:12:16.269 "aliases": [ 00:12:16.269 "71186124-c276-4d4b-9e7f-e3c24bba5e8e" 00:12:16.269 ], 00:12:16.269 "product_name": "Malloc disk", 00:12:16.269 "block_size": 512, 00:12:16.269 "num_blocks": 65536, 00:12:16.269 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:16.269 "assigned_rate_limits": { 00:12:16.269 "rw_ios_per_sec": 0, 00:12:16.269 "rw_mbytes_per_sec": 0, 00:12:16.269 "r_mbytes_per_sec": 0, 00:12:16.269 "w_mbytes_per_sec": 0 00:12:16.269 }, 00:12:16.269 "claimed": true, 00:12:16.269 "claim_type": "exclusive_write", 00:12:16.269 "zoned": false, 00:12:16.269 "supported_io_types": { 00:12:16.269 "read": true, 00:12:16.269 "write": true, 00:12:16.269 "unmap": true, 
00:12:16.269 "flush": true, 00:12:16.269 "reset": true, 00:12:16.269 "nvme_admin": false, 00:12:16.269 "nvme_io": false, 00:12:16.269 "nvme_io_md": false, 00:12:16.269 "write_zeroes": true, 00:12:16.269 "zcopy": true, 00:12:16.269 "get_zone_info": false, 00:12:16.269 "zone_management": false, 00:12:16.269 "zone_append": false, 00:12:16.269 "compare": false, 00:12:16.269 "compare_and_write": false, 00:12:16.269 "abort": true, 00:12:16.269 "seek_hole": false, 00:12:16.269 "seek_data": false, 00:12:16.269 "copy": true, 00:12:16.269 "nvme_iov_md": false 00:12:16.269 }, 00:12:16.269 "memory_domains": [ 00:12:16.269 { 00:12:16.269 "dma_device_id": "system", 00:12:16.269 "dma_device_type": 1 00:12:16.269 }, 00:12:16.269 { 00:12:16.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.269 "dma_device_type": 2 00:12:16.269 } 00:12:16.269 ], 00:12:16.269 "driver_specific": {} 00:12:16.269 } 00:12:16.269 ] 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.269 07:09:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.269 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.269 "name": "Existed_Raid", 00:12:16.269 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:16.269 "strip_size_kb": 64, 00:12:16.269 "state": "online", 00:12:16.269 "raid_level": "raid0", 00:12:16.269 "superblock": true, 00:12:16.269 "num_base_bdevs": 3, 00:12:16.269 "num_base_bdevs_discovered": 3, 00:12:16.269 "num_base_bdevs_operational": 3, 00:12:16.269 "base_bdevs_list": [ 00:12:16.269 { 00:12:16.269 "name": "NewBaseBdev", 00:12:16.269 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:16.269 "is_configured": true, 00:12:16.269 "data_offset": 2048, 00:12:16.270 "data_size": 63488 00:12:16.270 }, 00:12:16.270 { 00:12:16.270 "name": "BaseBdev2", 00:12:16.270 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:16.270 "is_configured": true, 00:12:16.270 "data_offset": 2048, 00:12:16.270 "data_size": 63488 00:12:16.270 }, 00:12:16.270 { 00:12:16.270 "name": "BaseBdev3", 00:12:16.270 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:16.270 "is_configured": 
true, 00:12:16.270 "data_offset": 2048, 00:12:16.270 "data_size": 63488 00:12:16.270 } 00:12:16.270 ] 00:12:16.270 }' 00:12:16.270 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.270 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.836 [2024-11-20 07:09:13.882493] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.836 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:16.836 "name": "Existed_Raid", 00:12:16.837 "aliases": [ 00:12:16.837 "74778cf2-85e4-4b76-81f0-019feaa79d8a" 00:12:16.837 ], 00:12:16.837 "product_name": "Raid Volume", 
00:12:16.837 "block_size": 512, 00:12:16.837 "num_blocks": 190464, 00:12:16.837 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:16.837 "assigned_rate_limits": { 00:12:16.837 "rw_ios_per_sec": 0, 00:12:16.837 "rw_mbytes_per_sec": 0, 00:12:16.837 "r_mbytes_per_sec": 0, 00:12:16.837 "w_mbytes_per_sec": 0 00:12:16.837 }, 00:12:16.837 "claimed": false, 00:12:16.837 "zoned": false, 00:12:16.837 "supported_io_types": { 00:12:16.837 "read": true, 00:12:16.837 "write": true, 00:12:16.837 "unmap": true, 00:12:16.837 "flush": true, 00:12:16.837 "reset": true, 00:12:16.837 "nvme_admin": false, 00:12:16.837 "nvme_io": false, 00:12:16.837 "nvme_io_md": false, 00:12:16.837 "write_zeroes": true, 00:12:16.837 "zcopy": false, 00:12:16.837 "get_zone_info": false, 00:12:16.837 "zone_management": false, 00:12:16.837 "zone_append": false, 00:12:16.837 "compare": false, 00:12:16.837 "compare_and_write": false, 00:12:16.837 "abort": false, 00:12:16.837 "seek_hole": false, 00:12:16.837 "seek_data": false, 00:12:16.837 "copy": false, 00:12:16.837 "nvme_iov_md": false 00:12:16.837 }, 00:12:16.837 "memory_domains": [ 00:12:16.837 { 00:12:16.837 "dma_device_id": "system", 00:12:16.837 "dma_device_type": 1 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.837 "dma_device_type": 2 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "dma_device_id": "system", 00:12:16.837 "dma_device_type": 1 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.837 "dma_device_type": 2 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "dma_device_id": "system", 00:12:16.837 "dma_device_type": 1 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.837 "dma_device_type": 2 00:12:16.837 } 00:12:16.837 ], 00:12:16.837 "driver_specific": { 00:12:16.837 "raid": { 00:12:16.837 "uuid": "74778cf2-85e4-4b76-81f0-019feaa79d8a", 00:12:16.837 "strip_size_kb": 64, 00:12:16.837 "state": "online", 00:12:16.837 
"raid_level": "raid0", 00:12:16.837 "superblock": true, 00:12:16.837 "num_base_bdevs": 3, 00:12:16.837 "num_base_bdevs_discovered": 3, 00:12:16.837 "num_base_bdevs_operational": 3, 00:12:16.837 "base_bdevs_list": [ 00:12:16.837 { 00:12:16.837 "name": "NewBaseBdev", 00:12:16.837 "uuid": "71186124-c276-4d4b-9e7f-e3c24bba5e8e", 00:12:16.837 "is_configured": true, 00:12:16.837 "data_offset": 2048, 00:12:16.837 "data_size": 63488 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "name": "BaseBdev2", 00:12:16.837 "uuid": "235621ff-2996-4a9e-b8cc-d076aadc3701", 00:12:16.837 "is_configured": true, 00:12:16.837 "data_offset": 2048, 00:12:16.837 "data_size": 63488 00:12:16.837 }, 00:12:16.837 { 00:12:16.837 "name": "BaseBdev3", 00:12:16.837 "uuid": "ac340b0f-5732-4141-9927-f7e7dd4e2c80", 00:12:16.837 "is_configured": true, 00:12:16.837 "data_offset": 2048, 00:12:16.837 "data_size": 63488 00:12:16.837 } 00:12:16.837 ] 00:12:16.837 } 00:12:16.837 } 00:12:16.837 }' 00:12:16.837 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:16.837 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:16.837 BaseBdev2 00:12:16.837 BaseBdev3' 00:12:16.837 07:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.837 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.095 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.095 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:17.095 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.096 [2024-11-20 07:09:14.202275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:17.096 [2024-11-20 07:09:14.202675] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.096 [2024-11-20 07:09:14.202881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.096 [2024-11-20 07:09:14.202990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.096 [2024-11-20 07:09:14.203014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64371 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64371 ']' 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64371 00:12:17.096 07:09:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64371 00:12:17.096 killing process with pid 64371 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64371' 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64371 00:12:17.096 07:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64371 00:12:17.096 [2024-11-20 07:09:14.246969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.355 [2024-11-20 07:09:14.542614] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.731 07:09:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:18.731 00:12:18.731 real 0m11.832s 00:12:18.731 user 0m19.492s 00:12:18.731 sys 0m1.602s 00:12:18.731 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.731 ************************************ 00:12:18.731 END TEST raid_state_function_test_sb 00:12:18.731 ************************************ 00:12:18.731 07:09:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.731 07:09:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:12:18.731 07:09:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:18.731 07:09:15 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.731 07:09:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.731 ************************************ 00:12:18.731 START TEST raid_superblock_test 00:12:18.731 ************************************ 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:18.731 07:09:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65008 00:12:18.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65008 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65008 ']' 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.731 07:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.731 [2024-11-20 07:09:15.841265] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:12:18.731 [2024-11-20 07:09:15.841508] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65008 ] 00:12:18.731 [2024-11-20 07:09:16.022712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.989 [2024-11-20 07:09:16.195464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.248 [2024-11-20 07:09:16.399039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.248 [2024-11-20 07:09:16.399289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:19.816 
07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.816 malloc1 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.816 [2024-11-20 07:09:16.899676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:19.816 [2024-11-20 07:09:16.899783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.816 [2024-11-20 07:09:16.899820] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.816 [2024-11-20 07:09:16.899835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.816 [2024-11-20 07:09:16.902779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.816 [2024-11-20 07:09:16.902978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:19.816 pt1 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.816 malloc2 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.816 [2024-11-20 07:09:16.955861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.816 [2024-11-20 07:09:16.955946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.816 [2024-11-20 07:09:16.955977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.816 [2024-11-20 07:09:16.955991] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.816 [2024-11-20 07:09:16.958676] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.816 [2024-11-20 07:09:16.958846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.816 
pt2 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.816 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.817 07:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:19.817 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.817 07:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.817 malloc3 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.817 [2024-11-20 07:09:17.021239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:19.817 [2024-11-20 07:09:17.021306] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.817 [2024-11-20 07:09:17.021339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:19.817 [2024-11-20 07:09:17.021354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.817 [2024-11-20 07:09:17.024110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.817 [2024-11-20 07:09:17.024153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:19.817 pt3 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.817 [2024-11-20 07:09:17.029274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:19.817 [2024-11-20 07:09:17.031772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.817 [2024-11-20 07:09:17.031882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:19.817 [2024-11-20 07:09:17.032092] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:19.817 [2024-11-20 07:09:17.032116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:19.817 [2024-11-20 07:09:17.032421] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:19.817 [2024-11-20 07:09:17.032631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:19.817 [2024-11-20 07:09:17.032647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:19.817 [2024-11-20 07:09:17.032831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.817 "name": "raid_bdev1", 00:12:19.817 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:19.817 "strip_size_kb": 64, 00:12:19.817 "state": "online", 00:12:19.817 "raid_level": "raid0", 00:12:19.817 "superblock": true, 00:12:19.817 "num_base_bdevs": 3, 00:12:19.817 "num_base_bdevs_discovered": 3, 00:12:19.817 "num_base_bdevs_operational": 3, 00:12:19.817 "base_bdevs_list": [ 00:12:19.817 { 00:12:19.817 "name": "pt1", 00:12:19.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.817 "is_configured": true, 00:12:19.817 "data_offset": 2048, 00:12:19.817 "data_size": 63488 00:12:19.817 }, 00:12:19.817 { 00:12:19.817 "name": "pt2", 00:12:19.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.817 "is_configured": true, 00:12:19.817 "data_offset": 2048, 00:12:19.817 "data_size": 63488 00:12:19.817 }, 00:12:19.817 { 00:12:19.817 "name": "pt3", 00:12:19.817 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.817 "is_configured": true, 00:12:19.817 "data_offset": 2048, 00:12:19.817 "data_size": 63488 00:12:19.817 } 00:12:19.817 ] 00:12:19.817 }' 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.817 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.383 [2024-11-20 07:09:17.569778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:20.383 "name": "raid_bdev1", 00:12:20.383 "aliases": [ 00:12:20.383 "24cdde79-4534-4e73-9384-691a1409935f" 00:12:20.383 ], 00:12:20.383 "product_name": "Raid Volume", 00:12:20.383 "block_size": 512, 00:12:20.383 "num_blocks": 190464, 00:12:20.383 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:20.383 "assigned_rate_limits": { 00:12:20.383 "rw_ios_per_sec": 0, 00:12:20.383 "rw_mbytes_per_sec": 0, 00:12:20.383 "r_mbytes_per_sec": 0, 00:12:20.383 "w_mbytes_per_sec": 0 00:12:20.383 }, 00:12:20.383 "claimed": false, 00:12:20.383 "zoned": false, 00:12:20.383 "supported_io_types": { 00:12:20.383 "read": true, 00:12:20.383 "write": true, 00:12:20.383 "unmap": true, 00:12:20.383 "flush": true, 00:12:20.383 "reset": true, 00:12:20.383 "nvme_admin": false, 00:12:20.383 "nvme_io": false, 00:12:20.383 "nvme_io_md": false, 00:12:20.383 "write_zeroes": true, 00:12:20.383 "zcopy": false, 00:12:20.383 "get_zone_info": false, 00:12:20.383 "zone_management": false, 00:12:20.383 "zone_append": false, 00:12:20.383 "compare": 
false, 00:12:20.383 "compare_and_write": false, 00:12:20.383 "abort": false, 00:12:20.383 "seek_hole": false, 00:12:20.383 "seek_data": false, 00:12:20.383 "copy": false, 00:12:20.383 "nvme_iov_md": false 00:12:20.383 }, 00:12:20.383 "memory_domains": [ 00:12:20.383 { 00:12:20.383 "dma_device_id": "system", 00:12:20.383 "dma_device_type": 1 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.383 "dma_device_type": 2 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "dma_device_id": "system", 00:12:20.383 "dma_device_type": 1 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.383 "dma_device_type": 2 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "dma_device_id": "system", 00:12:20.383 "dma_device_type": 1 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:20.383 "dma_device_type": 2 00:12:20.383 } 00:12:20.383 ], 00:12:20.383 "driver_specific": { 00:12:20.383 "raid": { 00:12:20.383 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:20.383 "strip_size_kb": 64, 00:12:20.383 "state": "online", 00:12:20.383 "raid_level": "raid0", 00:12:20.383 "superblock": true, 00:12:20.383 "num_base_bdevs": 3, 00:12:20.383 "num_base_bdevs_discovered": 3, 00:12:20.383 "num_base_bdevs_operational": 3, 00:12:20.383 "base_bdevs_list": [ 00:12:20.383 { 00:12:20.383 "name": "pt1", 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.383 "is_configured": true, 00:12:20.383 "data_offset": 2048, 00:12:20.383 "data_size": 63488 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "name": "pt2", 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.383 "is_configured": true, 00:12:20.383 "data_offset": 2048, 00:12:20.383 "data_size": 63488 00:12:20.383 }, 00:12:20.383 { 00:12:20.383 "name": "pt3", 00:12:20.383 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.383 "is_configured": true, 00:12:20.383 "data_offset": 2048, 00:12:20.383 "data_size": 
63488 00:12:20.383 } 00:12:20.383 ] 00:12:20.383 } 00:12:20.383 } 00:12:20.383 }' 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:20.383 pt2 00:12:20.383 pt3' 00:12:20.383 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.641 
07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.641 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:20.642 [2024-11-20 07:09:17.869813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24cdde79-4534-4e73-9384-691a1409935f 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24cdde79-4534-4e73-9384-691a1409935f ']' 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.642 [2024-11-20 07:09:17.917470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.642 [2024-11-20 07:09:17.917506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:20.642 [2024-11-20 07:09:17.917607] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:20.642 [2024-11-20 07:09:17.917690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:20.642 [2024-11-20 07:09:17.917706] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.642 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.899 07:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:20.899 07:09:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:20.899 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.900 [2024-11-20 07:09:18.069599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:20.900 [2024-11-20 07:09:18.072091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:20.900 [2024-11-20 07:09:18.072287] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:20.900 [2024-11-20 07:09:18.072374] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:20.900 [2024-11-20 07:09:18.072446] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:20.900 [2024-11-20 07:09:18.072480] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:20.900 [2024-11-20 07:09:18.072509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:20.900 [2024-11-20 07:09:18.072526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:20.900 request: 00:12:20.900 { 00:12:20.900 "name": "raid_bdev1", 00:12:20.900 "raid_level": "raid0", 00:12:20.900 "base_bdevs": [ 00:12:20.900 "malloc1", 00:12:20.900 "malloc2", 00:12:20.900 "malloc3" 00:12:20.900 ], 00:12:20.900 "strip_size_kb": 64, 00:12:20.900 "superblock": false, 00:12:20.900 "method": "bdev_raid_create", 00:12:20.900 "req_id": 1 00:12:20.900 } 00:12:20.900 Got JSON-RPC error response 00:12:20.900 response: 00:12:20.900 { 00:12:20.900 "code": -17, 00:12:20.900 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:20.900 } 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.900 [2024-11-20 07:09:18.137529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:20.900 [2024-11-20 07:09:18.137723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.900 [2024-11-20 07:09:18.137801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:20.900 [2024-11-20 07:09:18.138025] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.900 [2024-11-20 07:09:18.140937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.900 [2024-11-20 07:09:18.141086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:20.900 [2024-11-20 07:09:18.141295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:20.900 [2024-11-20 07:09:18.141465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:12:20.900 pt1 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.900 "name": "raid_bdev1", 00:12:20.900 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:20.900 
"strip_size_kb": 64, 00:12:20.900 "state": "configuring", 00:12:20.900 "raid_level": "raid0", 00:12:20.900 "superblock": true, 00:12:20.900 "num_base_bdevs": 3, 00:12:20.900 "num_base_bdevs_discovered": 1, 00:12:20.900 "num_base_bdevs_operational": 3, 00:12:20.900 "base_bdevs_list": [ 00:12:20.900 { 00:12:20.900 "name": "pt1", 00:12:20.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.900 "is_configured": true, 00:12:20.900 "data_offset": 2048, 00:12:20.900 "data_size": 63488 00:12:20.900 }, 00:12:20.900 { 00:12:20.900 "name": null, 00:12:20.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.900 "is_configured": false, 00:12:20.900 "data_offset": 2048, 00:12:20.900 "data_size": 63488 00:12:20.900 }, 00:12:20.900 { 00:12:20.900 "name": null, 00:12:20.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.900 "is_configured": false, 00:12:20.900 "data_offset": 2048, 00:12:20.900 "data_size": 63488 00:12:20.900 } 00:12:20.900 ] 00:12:20.900 }' 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.900 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.465 [2024-11-20 07:09:18.637971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.465 [2024-11-20 07:09:18.638047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.465 [2024-11-20 07:09:18.638081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:12:21.465 [2024-11-20 07:09:18.638095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.465 [2024-11-20 07:09:18.638644] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.465 [2024-11-20 07:09:18.638675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.465 [2024-11-20 07:09:18.638786] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:21.465 [2024-11-20 07:09:18.638817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.465 pt2 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.465 [2024-11-20 07:09:18.645965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:21.465 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.465 07:09:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.466 "name": "raid_bdev1", 00:12:21.466 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:21.466 "strip_size_kb": 64, 00:12:21.466 "state": "configuring", 00:12:21.466 "raid_level": "raid0", 00:12:21.466 "superblock": true, 00:12:21.466 "num_base_bdevs": 3, 00:12:21.466 "num_base_bdevs_discovered": 1, 00:12:21.466 "num_base_bdevs_operational": 3, 00:12:21.466 "base_bdevs_list": [ 00:12:21.466 { 00:12:21.466 "name": "pt1", 00:12:21.466 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.466 "is_configured": true, 00:12:21.466 "data_offset": 2048, 00:12:21.466 "data_size": 63488 00:12:21.466 }, 00:12:21.466 { 00:12:21.466 "name": null, 00:12:21.466 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.466 "is_configured": false, 00:12:21.466 "data_offset": 0, 00:12:21.466 "data_size": 63488 00:12:21.466 }, 00:12:21.466 { 00:12:21.466 "name": null, 00:12:21.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.466 
"is_configured": false, 00:12:21.466 "data_offset": 2048, 00:12:21.466 "data_size": 63488 00:12:21.466 } 00:12:21.466 ] 00:12:21.466 }' 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.466 07:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.030 [2024-11-20 07:09:19.198150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.030 [2024-11-20 07:09:19.198239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.030 [2024-11-20 07:09:19.198267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:12:22.030 [2024-11-20 07:09:19.198284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.030 [2024-11-20 07:09:19.198851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.030 [2024-11-20 07:09:19.198901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.030 [2024-11-20 07:09:19.199004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:22.030 [2024-11-20 07:09:19.199042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.030 pt2 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:22.030 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.031 [2024-11-20 07:09:19.206120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:22.031 [2024-11-20 07:09:19.206309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.031 [2024-11-20 07:09:19.206340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:22.031 [2024-11-20 07:09:19.206357] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.031 [2024-11-20 07:09:19.206812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.031 [2024-11-20 07:09:19.206845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:22.031 [2024-11-20 07:09:19.206938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:22.031 [2024-11-20 07:09:19.206972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:22.031 [2024-11-20 07:09:19.207113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:22.031 [2024-11-20 07:09:19.207133] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:22.031 [2024-11-20 07:09:19.207436] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.031 [2024-11-20 07:09:19.207629] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:22.031 [2024-11-20 07:09:19.207644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:22.031 [2024-11-20 07:09:19.207803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.031 pt3 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.031 "name": "raid_bdev1", 00:12:22.031 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:22.031 "strip_size_kb": 64, 00:12:22.031 "state": "online", 00:12:22.031 "raid_level": "raid0", 00:12:22.031 "superblock": true, 00:12:22.031 "num_base_bdevs": 3, 00:12:22.031 "num_base_bdevs_discovered": 3, 00:12:22.031 "num_base_bdevs_operational": 3, 00:12:22.031 "base_bdevs_list": [ 00:12:22.031 { 00:12:22.031 "name": "pt1", 00:12:22.031 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.031 "is_configured": true, 00:12:22.031 "data_offset": 2048, 00:12:22.031 "data_size": 63488 00:12:22.031 }, 00:12:22.031 { 00:12:22.031 "name": "pt2", 00:12:22.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.031 "is_configured": true, 00:12:22.031 "data_offset": 2048, 00:12:22.031 "data_size": 63488 00:12:22.031 }, 00:12:22.031 { 00:12:22.031 "name": "pt3", 00:12:22.031 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.031 "is_configured": true, 00:12:22.031 "data_offset": 2048, 00:12:22.031 "data_size": 63488 00:12:22.031 } 00:12:22.031 ] 00:12:22.031 }' 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.031 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:22.597 07:09:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.597 [2024-11-20 07:09:19.726693] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.597 "name": "raid_bdev1", 00:12:22.597 "aliases": [ 00:12:22.597 "24cdde79-4534-4e73-9384-691a1409935f" 00:12:22.597 ], 00:12:22.597 "product_name": "Raid Volume", 00:12:22.597 "block_size": 512, 00:12:22.597 "num_blocks": 190464, 00:12:22.597 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:22.597 "assigned_rate_limits": { 00:12:22.597 "rw_ios_per_sec": 0, 00:12:22.597 "rw_mbytes_per_sec": 0, 00:12:22.597 "r_mbytes_per_sec": 0, 00:12:22.597 "w_mbytes_per_sec": 0 00:12:22.597 }, 00:12:22.597 "claimed": false, 00:12:22.597 "zoned": false, 00:12:22.597 "supported_io_types": { 00:12:22.597 "read": true, 00:12:22.597 "write": true, 00:12:22.597 "unmap": true, 00:12:22.597 "flush": true, 00:12:22.597 "reset": true, 00:12:22.597 "nvme_admin": false, 00:12:22.597 "nvme_io": false, 00:12:22.597 "nvme_io_md": false, 00:12:22.597 
"write_zeroes": true, 00:12:22.597 "zcopy": false, 00:12:22.597 "get_zone_info": false, 00:12:22.597 "zone_management": false, 00:12:22.597 "zone_append": false, 00:12:22.597 "compare": false, 00:12:22.597 "compare_and_write": false, 00:12:22.597 "abort": false, 00:12:22.597 "seek_hole": false, 00:12:22.597 "seek_data": false, 00:12:22.597 "copy": false, 00:12:22.597 "nvme_iov_md": false 00:12:22.597 }, 00:12:22.597 "memory_domains": [ 00:12:22.597 { 00:12:22.597 "dma_device_id": "system", 00:12:22.597 "dma_device_type": 1 00:12:22.597 }, 00:12:22.597 { 00:12:22.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.597 "dma_device_type": 2 00:12:22.597 }, 00:12:22.597 { 00:12:22.597 "dma_device_id": "system", 00:12:22.597 "dma_device_type": 1 00:12:22.597 }, 00:12:22.597 { 00:12:22.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.597 "dma_device_type": 2 00:12:22.597 }, 00:12:22.597 { 00:12:22.597 "dma_device_id": "system", 00:12:22.597 "dma_device_type": 1 00:12:22.597 }, 00:12:22.597 { 00:12:22.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.597 "dma_device_type": 2 00:12:22.597 } 00:12:22.597 ], 00:12:22.597 "driver_specific": { 00:12:22.597 "raid": { 00:12:22.597 "uuid": "24cdde79-4534-4e73-9384-691a1409935f", 00:12:22.597 "strip_size_kb": 64, 00:12:22.597 "state": "online", 00:12:22.597 "raid_level": "raid0", 00:12:22.597 "superblock": true, 00:12:22.597 "num_base_bdevs": 3, 00:12:22.597 "num_base_bdevs_discovered": 3, 00:12:22.597 "num_base_bdevs_operational": 3, 00:12:22.597 "base_bdevs_list": [ 00:12:22.597 { 00:12:22.597 "name": "pt1", 00:12:22.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.597 "is_configured": true, 00:12:22.597 "data_offset": 2048, 00:12:22.597 "data_size": 63488 00:12:22.597 }, 00:12:22.597 { 00:12:22.597 "name": "pt2", 00:12:22.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.597 "is_configured": true, 00:12:22.597 "data_offset": 2048, 00:12:22.597 "data_size": 63488 00:12:22.597 }, 00:12:22.597 
{ 00:12:22.597 "name": "pt3", 00:12:22.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.597 "is_configured": true, 00:12:22.597 "data_offset": 2048, 00:12:22.597 "data_size": 63488 00:12:22.597 } 00:12:22.597 ] 00:12:22.597 } 00:12:22.597 } 00:12:22.597 }' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.597 pt2 00:12:22.597 pt3' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.597 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:22.856 07:09:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.856 07:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.856 
[2024-11-20 07:09:20.058770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24cdde79-4534-4e73-9384-691a1409935f '!=' 24cdde79-4534-4e73-9384-691a1409935f ']' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65008 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65008 ']' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65008 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65008 00:12:22.856 killing process with pid 65008 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65008' 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65008 00:12:22.856 [2024-11-20 07:09:20.133015] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:22.856 07:09:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 65008 00:12:22.856 [2024-11-20 07:09:20.133155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:22.856 [2024-11-20 07:09:20.133234] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:22.856 [2024-11-20 07:09:20.133252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:23.114 [2024-11-20 07:09:20.403619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.486 07:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:24.486 00:12:24.486 real 0m5.680s 00:12:24.486 user 0m8.538s 00:12:24.486 sys 0m0.859s 00:12:24.486 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.486 07:09:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.486 ************************************ 00:12:24.486 END TEST raid_superblock_test 00:12:24.486 ************************************ 00:12:24.486 07:09:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:12:24.486 07:09:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:24.486 07:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.486 07:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.486 ************************************ 00:12:24.486 START TEST raid_read_error_test 00:12:24.486 ************************************ 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:24.486 07:09:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.486 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fwO1zvOVj5 00:12:24.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65262 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65262 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65262 ']' 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.487 07:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.487 [2024-11-20 07:09:21.576081] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:12:24.487 [2024-11-20 07:09:21.576258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65262 ] 00:12:24.487 [2024-11-20 07:09:21.764332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.744 [2024-11-20 07:09:21.928753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.001 [2024-11-20 07:09:22.133959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.001 [2024-11-20 07:09:22.134038] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.259 BaseBdev1_malloc 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.259 true 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.259 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 [2024-11-20 07:09:22.579690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:25.518 [2024-11-20 07:09:22.579764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.518 [2024-11-20 07:09:22.579794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:25.518 [2024-11-20 07:09:22.579812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.518 [2024-11-20 07:09:22.582591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.518 [2024-11-20 07:09:22.582643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.518 BaseBdev1 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 BaseBdev2_malloc 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 true 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 [2024-11-20 07:09:22.639631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:25.518 [2024-11-20 07:09:22.639859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.518 [2024-11-20 07:09:22.639909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:25.518 [2024-11-20 07:09:22.639928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.518 [2024-11-20 07:09:22.642673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.518 [2024-11-20 07:09:22.642724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.518 BaseBdev2 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 BaseBdev3_malloc 00:12:25.518 07:09:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 true 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 [2024-11-20 07:09:22.703415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:25.518 [2024-11-20 07:09:22.703626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.518 [2024-11-20 07:09:22.703666] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:25.518 [2024-11-20 07:09:22.703686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.518 [2024-11-20 07:09:22.706584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.518 [2024-11-20 07:09:22.706755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.518 BaseBdev3 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 [2024-11-20 07:09:22.711699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.518 [2024-11-20 07:09:22.714228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.518 [2024-11-20 07:09:22.714344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.518 [2024-11-20 07:09:22.714623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:25.518 [2024-11-20 07:09:22.714645] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:25.518 [2024-11-20 07:09:22.715009] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:25.518 [2024-11-20 07:09:22.715233] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:25.518 [2024-11-20 07:09:22.715264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:25.518 [2024-11-20 07:09:22.715460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:25.518 07:09:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.518 "name": "raid_bdev1", 00:12:25.518 "uuid": "26d8c520-5a74-4f16-b377-dfc9849e8ce3", 00:12:25.518 "strip_size_kb": 64, 00:12:25.518 "state": "online", 00:12:25.518 "raid_level": "raid0", 00:12:25.518 "superblock": true, 00:12:25.518 "num_base_bdevs": 3, 00:12:25.518 "num_base_bdevs_discovered": 3, 00:12:25.518 "num_base_bdevs_operational": 3, 00:12:25.518 "base_bdevs_list": [ 00:12:25.518 { 00:12:25.518 "name": "BaseBdev1", 00:12:25.518 "uuid": "62744366-7cc3-5951-88e2-435519f9ed8d", 00:12:25.518 "is_configured": true, 00:12:25.518 "data_offset": 2048, 00:12:25.518 "data_size": 63488 00:12:25.518 }, 00:12:25.518 { 00:12:25.518 "name": "BaseBdev2", 00:12:25.518 "uuid": "1db31572-f431-5546-9d1b-19c63f800b41", 00:12:25.518 "is_configured": true, 00:12:25.518 "data_offset": 2048, 00:12:25.518 "data_size": 63488 
00:12:25.518 }, 00:12:25.518 { 00:12:25.518 "name": "BaseBdev3", 00:12:25.518 "uuid": "a573d58f-100a-56c2-a4a0-21a5165ede33", 00:12:25.518 "is_configured": true, 00:12:25.518 "data_offset": 2048, 00:12:25.518 "data_size": 63488 00:12:25.518 } 00:12:25.518 ] 00:12:25.518 }' 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.518 07:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.085 07:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:26.085 07:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:26.085 [2024-11-20 07:09:23.397285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:27.032 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.033 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.323 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.323 "name": "raid_bdev1", 00:12:27.323 "uuid": "26d8c520-5a74-4f16-b377-dfc9849e8ce3", 00:12:27.323 "strip_size_kb": 64, 00:12:27.323 "state": "online", 00:12:27.323 "raid_level": "raid0", 00:12:27.323 "superblock": true, 00:12:27.323 "num_base_bdevs": 3, 00:12:27.323 "num_base_bdevs_discovered": 3, 00:12:27.323 "num_base_bdevs_operational": 3, 00:12:27.323 "base_bdevs_list": [ 00:12:27.323 { 00:12:27.323 "name": "BaseBdev1", 00:12:27.323 "uuid": "62744366-7cc3-5951-88e2-435519f9ed8d", 00:12:27.323 "is_configured": true, 00:12:27.323 "data_offset": 2048, 00:12:27.323 "data_size": 63488 
00:12:27.323 }, 00:12:27.323 { 00:12:27.323 "name": "BaseBdev2", 00:12:27.323 "uuid": "1db31572-f431-5546-9d1b-19c63f800b41", 00:12:27.323 "is_configured": true, 00:12:27.323 "data_offset": 2048, 00:12:27.323 "data_size": 63488 00:12:27.323 }, 00:12:27.323 { 00:12:27.323 "name": "BaseBdev3", 00:12:27.323 "uuid": "a573d58f-100a-56c2-a4a0-21a5165ede33", 00:12:27.323 "is_configured": true, 00:12:27.323 "data_offset": 2048, 00:12:27.323 "data_size": 63488 00:12:27.323 } 00:12:27.323 ] 00:12:27.323 }' 00:12:27.323 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.323 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.581 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.581 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.581 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.581 [2024-11-20 07:09:24.852416] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.581 [2024-11-20 07:09:24.852619] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.581 [2024-11-20 07:09:24.856106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.581 [2024-11-20 07:09:24.856311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.581 [2024-11-20 07:09:24.856420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.581 { 00:12:27.581 "results": [ 00:12:27.581 { 00:12:27.581 "job": "raid_bdev1", 00:12:27.582 "core_mask": "0x1", 00:12:27.582 "workload": "randrw", 00:12:27.582 "percentage": 50, 00:12:27.582 "status": "finished", 00:12:27.582 "queue_depth": 1, 00:12:27.582 "io_size": 131072, 00:12:27.582 "runtime": 1.453042, 00:12:27.582 "iops": 10403.691015125509, 00:12:27.582 
"mibps": 1300.4613768906886, 00:12:27.582 "io_failed": 1, 00:12:27.582 "io_timeout": 0, 00:12:27.582 "avg_latency_us": 133.99898495471982, 00:12:27.582 "min_latency_us": 30.487272727272728, 00:12:27.582 "max_latency_us": 1861.8181818181818 00:12:27.582 } 00:12:27.582 ], 00:12:27.582 "core_count": 1 00:12:27.582 } 00:12:27.582 [2024-11-20 07:09:24.856616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65262 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65262 ']' 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65262 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65262 00:12:27.582 killing process with pid 65262 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65262' 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65262 00:12:27.582 07:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65262 00:12:27.582 [2024-11-20 07:09:24.893500] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.840 [2024-11-20 
07:09:25.103804] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fwO1zvOVj5 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:29.212 00:12:29.212 real 0m4.726s 00:12:29.212 user 0m5.925s 00:12:29.212 sys 0m0.569s 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.212 07:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.212 ************************************ 00:12:29.212 END TEST raid_read_error_test 00:12:29.212 ************************************ 00:12:29.212 07:09:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:12:29.212 07:09:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:29.212 07:09:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.212 07:09:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.212 ************************************ 00:12:29.212 START TEST raid_write_error_test 00:12:29.212 ************************************ 00:12:29.212 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:12:29.213 07:09:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:29.213 07:09:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BU2r0EJwOL 00:12:29.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65408 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65408 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65408 ']' 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.213 07:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.213 [2024-11-20 07:09:26.384732] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:12:29.213 [2024-11-20 07:09:26.384950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65408 ] 00:12:29.471 [2024-11-20 07:09:26.572770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.471 [2024-11-20 07:09:26.700771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.728 [2024-11-20 07:09:26.925471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.728 [2024-11-20 07:09:26.925550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.986 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.986 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.986 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:29.986 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.986 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.986 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 BaseBdev1_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 true 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 [2024-11-20 07:09:27.345387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:30.245 [2024-11-20 07:09:27.345474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.245 [2024-11-20 07:09:27.345505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:30.245 [2024-11-20 07:09:27.345523] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.245 [2024-11-20 07:09:27.348331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.245 [2024-11-20 07:09:27.348388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:30.245 BaseBdev1 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:30.245 BaseBdev2_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 true 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 [2024-11-20 07:09:27.405137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:30.245 [2024-11-20 07:09:27.405216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.245 [2024-11-20 07:09:27.405241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:30.245 [2024-11-20 07:09:27.405259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.245 [2024-11-20 07:09:27.408014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.245 [2024-11-20 07:09:27.408064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:30.245 BaseBdev2 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:30.245 07:09:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 BaseBdev3_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 true 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.245 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.245 [2024-11-20 07:09:27.479595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:30.245 [2024-11-20 07:09:27.479674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.245 [2024-11-20 07:09:27.479701] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:30.245 [2024-11-20 07:09:27.479719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.245 [2024-11-20 07:09:27.482479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.245 [2024-11-20 07:09:27.482530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:12:30.245 BaseBdev3 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.246 [2024-11-20 07:09:27.487686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.246 [2024-11-20 07:09:27.490290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.246 [2024-11-20 07:09:27.490405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.246 [2024-11-20 07:09:27.490678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:30.246 [2024-11-20 07:09:27.490700] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:30.246 [2024-11-20 07:09:27.491045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:30.246 [2024-11-20 07:09:27.491260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:30.246 [2024-11-20 07:09:27.491384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:30.246 [2024-11-20 07:09:27.491661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.246 "name": "raid_bdev1", 00:12:30.246 "uuid": "45df67a4-35c8-4bc3-9038-d2d833253274", 00:12:30.246 "strip_size_kb": 64, 00:12:30.246 "state": "online", 00:12:30.246 "raid_level": "raid0", 00:12:30.246 "superblock": true, 00:12:30.246 "num_base_bdevs": 3, 00:12:30.246 "num_base_bdevs_discovered": 3, 00:12:30.246 "num_base_bdevs_operational": 3, 00:12:30.246 "base_bdevs_list": [ 00:12:30.246 { 00:12:30.246 "name": "BaseBdev1", 
00:12:30.246 "uuid": "88e49f09-c0fa-5c06-9331-82caeb515a8e", 00:12:30.246 "is_configured": true, 00:12:30.246 "data_offset": 2048, 00:12:30.246 "data_size": 63488 00:12:30.246 }, 00:12:30.246 { 00:12:30.246 "name": "BaseBdev2", 00:12:30.246 "uuid": "22c676a9-72a3-5469-9c36-e9c876a75f68", 00:12:30.246 "is_configured": true, 00:12:30.246 "data_offset": 2048, 00:12:30.246 "data_size": 63488 00:12:30.246 }, 00:12:30.246 { 00:12:30.246 "name": "BaseBdev3", 00:12:30.246 "uuid": "07b9c41a-b6e8-5223-9cec-0a617688d163", 00:12:30.246 "is_configured": true, 00:12:30.246 "data_offset": 2048, 00:12:30.246 "data_size": 63488 00:12:30.246 } 00:12:30.246 ] 00:12:30.246 }' 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.246 07:09:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.812 07:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:30.812 07:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:31.071 [2024-11-20 07:09:28.185308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.005 "name": "raid_bdev1", 00:12:32.005 "uuid": "45df67a4-35c8-4bc3-9038-d2d833253274", 00:12:32.005 "strip_size_kb": 64, 00:12:32.005 "state": "online", 00:12:32.005 
"raid_level": "raid0", 00:12:32.005 "superblock": true, 00:12:32.005 "num_base_bdevs": 3, 00:12:32.005 "num_base_bdevs_discovered": 3, 00:12:32.005 "num_base_bdevs_operational": 3, 00:12:32.005 "base_bdevs_list": [ 00:12:32.005 { 00:12:32.005 "name": "BaseBdev1", 00:12:32.005 "uuid": "88e49f09-c0fa-5c06-9331-82caeb515a8e", 00:12:32.005 "is_configured": true, 00:12:32.005 "data_offset": 2048, 00:12:32.005 "data_size": 63488 00:12:32.005 }, 00:12:32.005 { 00:12:32.005 "name": "BaseBdev2", 00:12:32.005 "uuid": "22c676a9-72a3-5469-9c36-e9c876a75f68", 00:12:32.005 "is_configured": true, 00:12:32.005 "data_offset": 2048, 00:12:32.005 "data_size": 63488 00:12:32.005 }, 00:12:32.005 { 00:12:32.005 "name": "BaseBdev3", 00:12:32.005 "uuid": "07b9c41a-b6e8-5223-9cec-0a617688d163", 00:12:32.005 "is_configured": true, 00:12:32.005 "data_offset": 2048, 00:12:32.005 "data_size": 63488 00:12:32.005 } 00:12:32.005 ] 00:12:32.005 }' 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.005 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.262 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:32.262 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.262 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.262 [2024-11-20 07:09:29.571614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:32.262 [2024-11-20 07:09:29.571798] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:32.262 [2024-11-20 07:09:29.575282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:32.262 [2024-11-20 07:09:29.575465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.263 [2024-11-20 07:09:29.575569] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:32.263 [2024-11-20 07:09:29.575773] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:32.263 { 00:12:32.263 "results": [ 00:12:32.263 { 00:12:32.263 "job": "raid_bdev1", 00:12:32.263 "core_mask": "0x1", 00:12:32.263 "workload": "randrw", 00:12:32.263 "percentage": 50, 00:12:32.263 "status": "finished", 00:12:32.263 "queue_depth": 1, 00:12:32.263 "io_size": 131072, 00:12:32.263 "runtime": 1.38408, 00:12:32.263 "iops": 10432.922952430496, 00:12:32.263 "mibps": 1304.115369053812, 00:12:32.263 "io_failed": 1, 00:12:32.263 "io_timeout": 0, 00:12:32.263 "avg_latency_us": 133.57024935316744, 00:12:32.263 "min_latency_us": 43.28727272727273, 00:12:32.263 "max_latency_us": 1809.6872727272728 00:12:32.263 } 00:12:32.263 ], 00:12:32.263 "core_count": 1 00:12:32.263 } 00:12:32.263 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.263 07:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65408 00:12:32.263 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65408 ']' 00:12:32.263 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65408 00:12:32.263 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:32.520 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:32.520 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65408 00:12:32.520 killing process with pid 65408 00:12:32.520 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:32.520 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:32.520 07:09:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65408' 00:12:32.520 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65408 00:12:32.520 [2024-11-20 07:09:29.612118] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:32.520 07:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65408 00:12:32.520 [2024-11-20 07:09:29.820905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BU2r0EJwOL 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:12:33.893 00:12:33.893 real 0m4.668s 00:12:33.893 user 0m5.755s 00:12:33.893 sys 0m0.601s 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.893 07:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.893 ************************************ 00:12:33.893 END TEST raid_write_error_test 00:12:33.893 ************************************ 00:12:33.893 07:09:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:33.893 07:09:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:12:33.894 07:09:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:33.894 07:09:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.894 07:09:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:33.894 ************************************ 00:12:33.894 START TEST raid_state_function_test 00:12:33.894 ************************************ 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:33.894 07:09:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:33.894 Process raid pid: 65550 00:12:33.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65550 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65550' 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65550 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65550 ']' 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.894 07:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.894 [2024-11-20 07:09:31.065666] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:12:33.894 [2024-11-20 07:09:31.066072] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.151 [2024-11-20 07:09:31.255465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.151 [2024-11-20 07:09:31.410648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.409 [2024-11-20 07:09:31.616754] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.409 [2024-11-20 07:09:31.617030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.973 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.973 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:34.973 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:34.973 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.973 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.974 [2024-11-20 07:09:32.026812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.974 [2024-11-20 07:09:32.027026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.974 [2024-11-20 07:09:32.027056] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.974 [2024-11-20 07:09:32.027075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.974 [2024-11-20 07:09:32.027085] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:12:34.974 [2024-11-20 07:09:32.027100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.974 07:09:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.974 "name": "Existed_Raid", 00:12:34.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.974 "strip_size_kb": 64, 00:12:34.974 "state": "configuring", 00:12:34.974 "raid_level": "concat", 00:12:34.974 "superblock": false, 00:12:34.974 "num_base_bdevs": 3, 00:12:34.974 "num_base_bdevs_discovered": 0, 00:12:34.974 "num_base_bdevs_operational": 3, 00:12:34.974 "base_bdevs_list": [ 00:12:34.974 { 00:12:34.974 "name": "BaseBdev1", 00:12:34.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.974 "is_configured": false, 00:12:34.974 "data_offset": 0, 00:12:34.974 "data_size": 0 00:12:34.974 }, 00:12:34.974 { 00:12:34.974 "name": "BaseBdev2", 00:12:34.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.974 "is_configured": false, 00:12:34.974 "data_offset": 0, 00:12:34.974 "data_size": 0 00:12:34.974 }, 00:12:34.974 { 00:12:34.974 "name": "BaseBdev3", 00:12:34.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.974 "is_configured": false, 00:12:34.974 "data_offset": 0, 00:12:34.974 "data_size": 0 00:12:34.974 } 00:12:34.974 ] 00:12:34.974 }' 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.974 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-11-20 07:09:32.526906] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.232 [2024-11-20 07:09:32.527089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 
00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.232 [2024-11-20 07:09:32.534860] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.232 [2024-11-20 07:09:32.535049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.232 [2024-11-20 07:09:32.535171] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.232 [2024-11-20 07:09:32.535325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.232 [2024-11-20 07:09:32.535465] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:35.232 [2024-11-20 07:09:32.535528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.232 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.490 [2024-11-20 07:09:32.579233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.490 BaseBdev1 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.490 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.490 [ 00:12:35.490 { 00:12:35.490 "name": "BaseBdev1", 00:12:35.490 "aliases": [ 00:12:35.490 "18b9bfd8-70b9-40e1-a116-714a1760bc62" 00:12:35.490 ], 00:12:35.490 "product_name": "Malloc disk", 00:12:35.490 "block_size": 512, 00:12:35.490 "num_blocks": 65536, 00:12:35.490 "uuid": "18b9bfd8-70b9-40e1-a116-714a1760bc62", 00:12:35.491 "assigned_rate_limits": { 00:12:35.491 "rw_ios_per_sec": 0, 00:12:35.491 "rw_mbytes_per_sec": 0, 00:12:35.491 "r_mbytes_per_sec": 0, 00:12:35.491 "w_mbytes_per_sec": 0 00:12:35.491 }, 
00:12:35.491 "claimed": true, 00:12:35.491 "claim_type": "exclusive_write", 00:12:35.491 "zoned": false, 00:12:35.491 "supported_io_types": { 00:12:35.491 "read": true, 00:12:35.491 "write": true, 00:12:35.491 "unmap": true, 00:12:35.491 "flush": true, 00:12:35.491 "reset": true, 00:12:35.491 "nvme_admin": false, 00:12:35.491 "nvme_io": false, 00:12:35.491 "nvme_io_md": false, 00:12:35.491 "write_zeroes": true, 00:12:35.491 "zcopy": true, 00:12:35.491 "get_zone_info": false, 00:12:35.491 "zone_management": false, 00:12:35.491 "zone_append": false, 00:12:35.491 "compare": false, 00:12:35.491 "compare_and_write": false, 00:12:35.491 "abort": true, 00:12:35.491 "seek_hole": false, 00:12:35.491 "seek_data": false, 00:12:35.491 "copy": true, 00:12:35.491 "nvme_iov_md": false 00:12:35.491 }, 00:12:35.491 "memory_domains": [ 00:12:35.491 { 00:12:35.491 "dma_device_id": "system", 00:12:35.491 "dma_device_type": 1 00:12:35.491 }, 00:12:35.491 { 00:12:35.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.491 "dma_device_type": 2 00:12:35.491 } 00:12:35.491 ], 00:12:35.491 "driver_specific": {} 00:12:35.491 } 00:12:35.491 ] 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:35.491 07:09:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.491 "name": "Existed_Raid", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.491 "strip_size_kb": 64, 00:12:35.491 "state": "configuring", 00:12:35.491 "raid_level": "concat", 00:12:35.491 "superblock": false, 00:12:35.491 "num_base_bdevs": 3, 00:12:35.491 "num_base_bdevs_discovered": 1, 00:12:35.491 "num_base_bdevs_operational": 3, 00:12:35.491 "base_bdevs_list": [ 00:12:35.491 { 00:12:35.491 "name": "BaseBdev1", 00:12:35.491 "uuid": "18b9bfd8-70b9-40e1-a116-714a1760bc62", 00:12:35.491 "is_configured": true, 00:12:35.491 "data_offset": 0, 00:12:35.491 "data_size": 65536 00:12:35.491 }, 00:12:35.491 { 00:12:35.491 "name": "BaseBdev2", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.491 "is_configured": false, 00:12:35.491 
"data_offset": 0, 00:12:35.491 "data_size": 0 00:12:35.491 }, 00:12:35.491 { 00:12:35.491 "name": "BaseBdev3", 00:12:35.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.491 "is_configured": false, 00:12:35.491 "data_offset": 0, 00:12:35.491 "data_size": 0 00:12:35.491 } 00:12:35.491 ] 00:12:35.491 }' 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.491 07:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.057 [2024-11-20 07:09:33.139433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.057 [2024-11-20 07:09:33.139636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.057 [2024-11-20 07:09:33.147473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.057 [2024-11-20 07:09:33.149859] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.057 [2024-11-20 07:09:33.150051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:12:36.057 [2024-11-20 07:09:33.150186] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:36.057 [2024-11-20 07:09:33.150248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.057 "name": "Existed_Raid", 00:12:36.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.057 "strip_size_kb": 64, 00:12:36.057 "state": "configuring", 00:12:36.057 "raid_level": "concat", 00:12:36.057 "superblock": false, 00:12:36.057 "num_base_bdevs": 3, 00:12:36.057 "num_base_bdevs_discovered": 1, 00:12:36.057 "num_base_bdevs_operational": 3, 00:12:36.057 "base_bdevs_list": [ 00:12:36.057 { 00:12:36.057 "name": "BaseBdev1", 00:12:36.057 "uuid": "18b9bfd8-70b9-40e1-a116-714a1760bc62", 00:12:36.057 "is_configured": true, 00:12:36.057 "data_offset": 0, 00:12:36.057 "data_size": 65536 00:12:36.057 }, 00:12:36.057 { 00:12:36.057 "name": "BaseBdev2", 00:12:36.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.057 "is_configured": false, 00:12:36.057 "data_offset": 0, 00:12:36.057 "data_size": 0 00:12:36.057 }, 00:12:36.057 { 00:12:36.057 "name": "BaseBdev3", 00:12:36.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.057 "is_configured": false, 00:12:36.057 "data_offset": 0, 00:12:36.057 "data_size": 0 00:12:36.057 } 00:12:36.057 ] 00:12:36.057 }' 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.057 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.315 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.315 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:36.315 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.579 [2024-11-20 07:09:33.654746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:36.579 BaseBdev2 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.579 [ 00:12:36.579 { 00:12:36.579 "name": "BaseBdev2", 00:12:36.579 "aliases": [ 00:12:36.579 "c309c31f-e088-40d9-a3b2-4519984650e1" 00:12:36.579 ], 00:12:36.579 
"product_name": "Malloc disk", 00:12:36.579 "block_size": 512, 00:12:36.579 "num_blocks": 65536, 00:12:36.579 "uuid": "c309c31f-e088-40d9-a3b2-4519984650e1", 00:12:36.579 "assigned_rate_limits": { 00:12:36.579 "rw_ios_per_sec": 0, 00:12:36.579 "rw_mbytes_per_sec": 0, 00:12:36.579 "r_mbytes_per_sec": 0, 00:12:36.579 "w_mbytes_per_sec": 0 00:12:36.579 }, 00:12:36.579 "claimed": true, 00:12:36.579 "claim_type": "exclusive_write", 00:12:36.579 "zoned": false, 00:12:36.579 "supported_io_types": { 00:12:36.579 "read": true, 00:12:36.579 "write": true, 00:12:36.579 "unmap": true, 00:12:36.579 "flush": true, 00:12:36.579 "reset": true, 00:12:36.579 "nvme_admin": false, 00:12:36.579 "nvme_io": false, 00:12:36.579 "nvme_io_md": false, 00:12:36.579 "write_zeroes": true, 00:12:36.579 "zcopy": true, 00:12:36.579 "get_zone_info": false, 00:12:36.579 "zone_management": false, 00:12:36.579 "zone_append": false, 00:12:36.579 "compare": false, 00:12:36.579 "compare_and_write": false, 00:12:36.579 "abort": true, 00:12:36.579 "seek_hole": false, 00:12:36.579 "seek_data": false, 00:12:36.579 "copy": true, 00:12:36.579 "nvme_iov_md": false 00:12:36.579 }, 00:12:36.579 "memory_domains": [ 00:12:36.579 { 00:12:36.579 "dma_device_id": "system", 00:12:36.579 "dma_device_type": 1 00:12:36.579 }, 00:12:36.579 { 00:12:36.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.579 "dma_device_type": 2 00:12:36.579 } 00:12:36.579 ], 00:12:36.579 "driver_specific": {} 00:12:36.579 } 00:12:36.579 ] 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.579 "name": "Existed_Raid", 00:12:36.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.579 "strip_size_kb": 64, 00:12:36.579 "state": "configuring", 00:12:36.579 "raid_level": "concat", 00:12:36.579 "superblock": false, 
00:12:36.579 "num_base_bdevs": 3, 00:12:36.579 "num_base_bdevs_discovered": 2, 00:12:36.579 "num_base_bdevs_operational": 3, 00:12:36.579 "base_bdevs_list": [ 00:12:36.579 { 00:12:36.579 "name": "BaseBdev1", 00:12:36.579 "uuid": "18b9bfd8-70b9-40e1-a116-714a1760bc62", 00:12:36.579 "is_configured": true, 00:12:36.579 "data_offset": 0, 00:12:36.579 "data_size": 65536 00:12:36.579 }, 00:12:36.579 { 00:12:36.579 "name": "BaseBdev2", 00:12:36.579 "uuid": "c309c31f-e088-40d9-a3b2-4519984650e1", 00:12:36.579 "is_configured": true, 00:12:36.579 "data_offset": 0, 00:12:36.579 "data_size": 65536 00:12:36.579 }, 00:12:36.579 { 00:12:36.579 "name": "BaseBdev3", 00:12:36.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.579 "is_configured": false, 00:12:36.579 "data_offset": 0, 00:12:36.579 "data_size": 0 00:12:36.579 } 00:12:36.579 ] 00:12:36.579 }' 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.579 07:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.858 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.858 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.858 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.116 [2024-11-20 07:09:34.216891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.116 [2024-11-20 07:09:34.217120] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.116 [2024-11-20 07:09:34.217156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:37.116 [2024-11-20 07:09:34.217513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:37.116 [2024-11-20 07:09:34.217739] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:12:37.116 [2024-11-20 07:09:34.217757] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:37.116 [2024-11-20 07:09:34.218104] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.116 BaseBdev3 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.116 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.116 [ 00:12:37.116 { 00:12:37.116 "name": "BaseBdev3", 00:12:37.116 "aliases": [ 
00:12:37.116 "aa4732c0-b319-4c1f-849b-f23caf61b1ab" 00:12:37.116 ], 00:12:37.116 "product_name": "Malloc disk", 00:12:37.116 "block_size": 512, 00:12:37.116 "num_blocks": 65536, 00:12:37.116 "uuid": "aa4732c0-b319-4c1f-849b-f23caf61b1ab", 00:12:37.116 "assigned_rate_limits": { 00:12:37.116 "rw_ios_per_sec": 0, 00:12:37.116 "rw_mbytes_per_sec": 0, 00:12:37.116 "r_mbytes_per_sec": 0, 00:12:37.116 "w_mbytes_per_sec": 0 00:12:37.116 }, 00:12:37.116 "claimed": true, 00:12:37.116 "claim_type": "exclusive_write", 00:12:37.116 "zoned": false, 00:12:37.116 "supported_io_types": { 00:12:37.116 "read": true, 00:12:37.116 "write": true, 00:12:37.116 "unmap": true, 00:12:37.116 "flush": true, 00:12:37.117 "reset": true, 00:12:37.117 "nvme_admin": false, 00:12:37.117 "nvme_io": false, 00:12:37.117 "nvme_io_md": false, 00:12:37.117 "write_zeroes": true, 00:12:37.117 "zcopy": true, 00:12:37.117 "get_zone_info": false, 00:12:37.117 "zone_management": false, 00:12:37.117 "zone_append": false, 00:12:37.117 "compare": false, 00:12:37.117 "compare_and_write": false, 00:12:37.117 "abort": true, 00:12:37.117 "seek_hole": false, 00:12:37.117 "seek_data": false, 00:12:37.117 "copy": true, 00:12:37.117 "nvme_iov_md": false 00:12:37.117 }, 00:12:37.117 "memory_domains": [ 00:12:37.117 { 00:12:37.117 "dma_device_id": "system", 00:12:37.117 "dma_device_type": 1 00:12:37.117 }, 00:12:37.117 { 00:12:37.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.117 "dma_device_type": 2 00:12:37.117 } 00:12:37.117 ], 00:12:37.117 "driver_specific": {} 00:12:37.117 } 00:12:37.117 ] 00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:37.117 "name": "Existed_Raid",
00:12:37.117 "uuid": "6b4f0db9-8fb4-40f7-b1a4-7b9643267833",
00:12:37.117 "strip_size_kb": 64,
00:12:37.117 "state": "online",
00:12:37.117 "raid_level": "concat",
00:12:37.117 "superblock": false,
00:12:37.117 "num_base_bdevs": 3,
00:12:37.117 "num_base_bdevs_discovered": 3,
00:12:37.117 "num_base_bdevs_operational": 3,
00:12:37.117 "base_bdevs_list": [
00:12:37.117 {
00:12:37.117 "name": "BaseBdev1",
00:12:37.117 "uuid": "18b9bfd8-70b9-40e1-a116-714a1760bc62",
00:12:37.117 "is_configured": true,
00:12:37.117 "data_offset": 0,
00:12:37.117 "data_size": 65536
00:12:37.117 },
00:12:37.117 {
00:12:37.117 "name": "BaseBdev2",
00:12:37.117 "uuid": "c309c31f-e088-40d9-a3b2-4519984650e1",
00:12:37.117 "is_configured": true,
00:12:37.117 "data_offset": 0,
00:12:37.117 "data_size": 65536
00:12:37.117 },
00:12:37.117 {
00:12:37.117 "name": "BaseBdev3",
00:12:37.117 "uuid": "aa4732c0-b319-4c1f-849b-f23caf61b1ab",
00:12:37.117 "is_configured": true,
00:12:37.117 "data_offset": 0,
00:12:37.117 "data_size": 65536
00:12:37.117 }
00:12:37.117 ]
00:12:37.117 }'
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:37.117 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.683 [2024-11-20 07:09:34.749463] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:37.683 "name": "Existed_Raid",
00:12:37.683 "aliases": [
00:12:37.683 "6b4f0db9-8fb4-40f7-b1a4-7b9643267833"
00:12:37.683 ],
00:12:37.683 "product_name": "Raid Volume",
00:12:37.683 "block_size": 512,
00:12:37.683 "num_blocks": 196608,
00:12:37.683 "uuid": "6b4f0db9-8fb4-40f7-b1a4-7b9643267833",
00:12:37.683 "assigned_rate_limits": {
00:12:37.683 "rw_ios_per_sec": 0,
00:12:37.683 "rw_mbytes_per_sec": 0,
00:12:37.683 "r_mbytes_per_sec": 0,
00:12:37.683 "w_mbytes_per_sec": 0
00:12:37.683 },
00:12:37.683 "claimed": false,
00:12:37.683 "zoned": false,
00:12:37.683 "supported_io_types": {
00:12:37.683 "read": true,
00:12:37.683 "write": true,
00:12:37.683 "unmap": true,
00:12:37.683 "flush": true,
00:12:37.683 "reset": true,
00:12:37.683 "nvme_admin": false,
00:12:37.683 "nvme_io": false,
00:12:37.683 "nvme_io_md": false,
00:12:37.683 "write_zeroes": true,
00:12:37.683 "zcopy": false,
00:12:37.683 "get_zone_info": false,
00:12:37.683 "zone_management": false,
00:12:37.683 "zone_append": false,
00:12:37.683 "compare": false,
00:12:37.683 "compare_and_write": false,
00:12:37.683 "abort": false,
00:12:37.683 "seek_hole": false,
00:12:37.683 "seek_data": false,
00:12:37.683 "copy": false,
00:12:37.683 "nvme_iov_md": false
00:12:37.683 },
00:12:37.683 "memory_domains": [
00:12:37.683 {
00:12:37.683 "dma_device_id": "system",
00:12:37.683 "dma_device_type": 1
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:37.683 "dma_device_type": 2
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "dma_device_id": "system",
00:12:37.683 "dma_device_type": 1
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:37.683 "dma_device_type": 2
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "dma_device_id": "system",
00:12:37.683 "dma_device_type": 1
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:37.683 "dma_device_type": 2
00:12:37.683 }
00:12:37.683 ],
00:12:37.683 "driver_specific": {
00:12:37.683 "raid": {
00:12:37.683 "uuid": "6b4f0db9-8fb4-40f7-b1a4-7b9643267833",
00:12:37.683 "strip_size_kb": 64,
00:12:37.683 "state": "online",
00:12:37.683 "raid_level": "concat",
00:12:37.683 "superblock": false,
00:12:37.683 "num_base_bdevs": 3,
00:12:37.683 "num_base_bdevs_discovered": 3,
00:12:37.683 "num_base_bdevs_operational": 3,
00:12:37.683 "base_bdevs_list": [
00:12:37.683 {
00:12:37.683 "name": "BaseBdev1",
00:12:37.683 "uuid": "18b9bfd8-70b9-40e1-a116-714a1760bc62",
00:12:37.683 "is_configured": true,
00:12:37.683 "data_offset": 0,
00:12:37.683 "data_size": 65536
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "name": "BaseBdev2",
00:12:37.683 "uuid": "c309c31f-e088-40d9-a3b2-4519984650e1",
00:12:37.683 "is_configured": true,
00:12:37.683 "data_offset": 0,
00:12:37.683 "data_size": 65536
00:12:37.683 },
00:12:37.683 {
00:12:37.683 "name": "BaseBdev3",
00:12:37.683 "uuid": "aa4732c0-b319-4c1f-849b-f23caf61b1ab",
00:12:37.683 "is_configured": true,
00:12:37.683 "data_offset": 0,
00:12:37.683 "data_size": 65536
00:12:37.683 }
00:12:37.683 ]
00:12:37.683 }
00:12:37.683 }
00:12:37.683 }'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:12:37.683 BaseBdev2
00:12:37.683 BaseBdev3'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:37.683 07:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.941 [2024-11-20 07:09:35.077255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:12:37.941 [2024-11-20 07:09:35.077480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:37.941 [2024-11-20 07:09:35.077718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2
00:12:37.941 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:37.942 "name": "Existed_Raid",
00:12:37.942 "uuid": "6b4f0db9-8fb4-40f7-b1a4-7b9643267833",
00:12:37.942 "strip_size_kb": 64,
00:12:37.942 "state": "offline",
00:12:37.942 "raid_level": "concat",
00:12:37.942 "superblock": false,
00:12:37.942 "num_base_bdevs": 3,
00:12:37.942 "num_base_bdevs_discovered": 2,
00:12:37.942 "num_base_bdevs_operational": 2,
00:12:37.942 "base_bdevs_list": [
00:12:37.942 {
00:12:37.942 "name": null,
00:12:37.942 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:37.942 "is_configured": false,
00:12:37.942 "data_offset": 0,
00:12:37.942 "data_size": 65536
00:12:37.942 },
00:12:37.942 {
00:12:37.942 "name": "BaseBdev2",
00:12:37.942 "uuid": "c309c31f-e088-40d9-a3b2-4519984650e1",
00:12:37.942 "is_configured": true,
00:12:37.942 "data_offset": 0,
00:12:37.942 "data_size": 65536
00:12:37.942 },
00:12:37.942 {
00:12:37.942 "name": "BaseBdev3",
00:12:37.942 "uuid": "aa4732c0-b319-4c1f-849b-f23caf61b1ab",
00:12:37.942 "is_configured": true,
00:12:37.942 "data_offset": 0,
00:12:37.942 "data_size": 65536
00:12:37.942 }
00:12:37.942 ]
00:12:37.942 }'
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:37.942 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.541 [2024-11-20 07:09:35.729278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:38.541 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.800 [2024-11-20 07:09:35.885861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:38.800 [2024-11-20 07:09:35.886143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.800 07:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.800 BaseBdev2
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:38.800 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.059 [
00:12:39.059 {
00:12:39.059 "name": "BaseBdev2",
00:12:39.059 "aliases": [
00:12:39.059 "fdd32e8a-ffc1-411c-b154-50ca7781ab73"
00:12:39.059 ],
00:12:39.059 "product_name": "Malloc disk",
00:12:39.059 "block_size": 512,
00:12:39.059 "num_blocks": 65536,
00:12:39.059 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73",
00:12:39.059 "assigned_rate_limits": {
00:12:39.059 "rw_ios_per_sec": 0,
00:12:39.059 "rw_mbytes_per_sec": 0,
00:12:39.059 "r_mbytes_per_sec": 0,
00:12:39.059 "w_mbytes_per_sec": 0
00:12:39.059 },
00:12:39.059 "claimed": false,
00:12:39.059 "zoned": false,
00:12:39.059 "supported_io_types": {
00:12:39.059 "read": true,
00:12:39.059 "write": true,
00:12:39.059 "unmap": true,
00:12:39.059 "flush": true,
00:12:39.059 "reset": true,
00:12:39.059 "nvme_admin": false,
00:12:39.059 "nvme_io": false,
00:12:39.059 "nvme_io_md": false,
00:12:39.059 "write_zeroes": true,
00:12:39.059 "zcopy": true,
00:12:39.059 "get_zone_info": false,
00:12:39.059 "zone_management": false,
00:12:39.059 "zone_append": false,
00:12:39.059 "compare": false,
00:12:39.059 "compare_and_write": false,
00:12:39.059 "abort": true,
00:12:39.059 "seek_hole": false,
00:12:39.059 "seek_data": false,
00:12:39.059 "copy": true,
00:12:39.059 "nvme_iov_md": false
00:12:39.059 },
00:12:39.059 "memory_domains": [
00:12:39.059 {
00:12:39.059 "dma_device_id": "system",
00:12:39.059 "dma_device_type": 1
00:12:39.059 },
00:12:39.059 {
00:12:39.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.059 "dma_device_type": 2
00:12:39.059 }
00:12:39.059 ],
00:12:39.059 "driver_specific": {}
00:12:39.059 }
00:12:39.059 ]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.059 BaseBdev3
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.059 [
00:12:39.059 {
00:12:39.059 "name": "BaseBdev3",
00:12:39.059 "aliases": [
00:12:39.059 "66121856-c7c1-4cb8-b59e-eefa54554b20"
00:12:39.059 ],
00:12:39.059 "product_name": "Malloc disk",
00:12:39.059 "block_size": 512,
00:12:39.059 "num_blocks": 65536,
00:12:39.059 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20",
00:12:39.059 "assigned_rate_limits": {
00:12:39.059 "rw_ios_per_sec": 0,
00:12:39.059 "rw_mbytes_per_sec": 0,
00:12:39.059 "r_mbytes_per_sec": 0,
00:12:39.059 "w_mbytes_per_sec": 0
00:12:39.059 },
00:12:39.059 "claimed": false,
00:12:39.059 "zoned": false,
00:12:39.059 "supported_io_types": {
00:12:39.059 "read": true,
00:12:39.059 "write": true,
00:12:39.059 "unmap": true,
00:12:39.059 "flush": true,
00:12:39.059 "reset": true,
00:12:39.059 "nvme_admin": false,
00:12:39.059 "nvme_io": false,
00:12:39.059 "nvme_io_md": false,
00:12:39.059 "write_zeroes": true,
00:12:39.059 "zcopy": true,
00:12:39.059 "get_zone_info": false,
00:12:39.059 "zone_management": false,
00:12:39.059 "zone_append": false,
00:12:39.059 "compare": false,
00:12:39.059 "compare_and_write": false,
00:12:39.059 "abort": true,
00:12:39.059 "seek_hole": false,
00:12:39.059 "seek_data": false,
00:12:39.059 "copy": true,
00:12:39.059 "nvme_iov_md": false
00:12:39.059 },
00:12:39.059 "memory_domains": [
00:12:39.059 {
00:12:39.059 "dma_device_id": "system",
00:12:39.059 "dma_device_type": 1
00:12:39.059 },
00:12:39.059 {
00:12:39.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:39.059 "dma_device_type": 2
00:12:39.059 }
00:12:39.059 ],
00:12:39.059 "driver_specific": {}
00:12:39.059 }
00:12:39.059 ]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.059 [2024-11-20 07:09:36.214774] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:39.059 [2024-11-20 07:09:36.214989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:39.059 [2024-11-20 07:09:36.215146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:39.059 [2024-11-20 07:09:36.217618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.059 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.059 "name": "Existed_Raid",
00:12:39.059 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.059 "strip_size_kb": 64,
00:12:39.059 "state": "configuring",
00:12:39.059 "raid_level": "concat",
00:12:39.059 "superblock": false,
00:12:39.059 "num_base_bdevs": 3,
00:12:39.059 "num_base_bdevs_discovered": 2,
00:12:39.059 "num_base_bdevs_operational": 3,
00:12:39.059 "base_bdevs_list": [
00:12:39.059 {
00:12:39.059 "name": "BaseBdev1",
00:12:39.059 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.059 "is_configured": false,
00:12:39.059 "data_offset": 0,
00:12:39.059 "data_size": 0
00:12:39.059 },
00:12:39.059 {
00:12:39.059 "name": "BaseBdev2",
00:12:39.059 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73",
00:12:39.060 "is_configured": true,
00:12:39.060 "data_offset": 0,
00:12:39.060 "data_size": 65536
00:12:39.060 },
00:12:39.060 {
00:12:39.060 "name": "BaseBdev3",
00:12:39.060 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20",
00:12:39.060 "is_configured": true,
00:12:39.060 "data_offset": 0,
00:12:39.060 "data_size": 65536
00:12:39.060 }
00:12:39.060 ]
00:12:39.060 }'
00:12:39.060 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.060 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.626 [2024-11-20 07:09:36.766994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:39.626 "name": "Existed_Raid",
00:12:39.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.626 "strip_size_kb": 64,
00:12:39.626 "state": "configuring",
00:12:39.626 "raid_level": "concat",
00:12:39.626 "superblock": false,
00:12:39.626 "num_base_bdevs": 3,
00:12:39.626 "num_base_bdevs_discovered": 1,
00:12:39.626 "num_base_bdevs_operational": 3,
00:12:39.626 "base_bdevs_list": [
00:12:39.626 {
00:12:39.626 "name": "BaseBdev1",
00:12:39.626 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:39.626 "is_configured": false,
00:12:39.626 "data_offset": 0,
00:12:39.626 "data_size": 0
00:12:39.626 },
00:12:39.626 {
00:12:39.626 "name": null,
00:12:39.626 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73",
00:12:39.626 "is_configured": false,
00:12:39.626 "data_offset": 0,
00:12:39.626 "data_size": 65536
00:12:39.626 },
00:12:39.626 {
00:12:39.626 "name": "BaseBdev3",
00:12:39.626 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20",
00:12:39.626 "is_configured": true,
00:12:39.626 "data_offset": 0,
00:12:39.626 "data_size": 65536
00:12:39.626 }
00:12:39.626 ]
00:12:39.626 }'
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:39.626 07:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.193 [2024-11-20 07:09:37.416552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:40.193 BaseBdev1
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.193 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:12:40.194 [
00:12:40.194 {
00:12:40.194 "name": "BaseBdev1",
00:12:40.194 "aliases": [
00:12:40.194 "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c"
00:12:40.194 ],
00:12:40.194 "product_name": "Malloc disk",
00:12:40.194 "block_size": 512,
00:12:40.194 "num_blocks": 65536,
00:12:40.194 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c",
00:12:40.194 "assigned_rate_limits": {
00:12:40.194 "rw_ios_per_sec": 0,
00:12:40.194 "rw_mbytes_per_sec": 0,
00:12:40.194 "r_mbytes_per_sec": 0,
00:12:40.194 "w_mbytes_per_sec": 0
00:12:40.194 },
00:12:40.194 "claimed": true,
00:12:40.194 "claim_type": "exclusive_write",
00:12:40.194 "zoned": false,
00:12:40.194 "supported_io_types": {
00:12:40.194 "read": true,
00:12:40.194 "write": true,
00:12:40.194 "unmap": true,
00:12:40.194 "flush": true,
00:12:40.194 "reset": true,
00:12:40.194 "nvme_admin": false,
00:12:40.194 "nvme_io": false,
00:12:40.194 "nvme_io_md": false,
00:12:40.194 "write_zeroes": true,
00:12:40.194 "zcopy": true,
00:12:40.194 "get_zone_info": false,
00:12:40.194 "zone_management": false,
00:12:40.194 "zone_append": false,
00:12:40.194 "compare": false,
00:12:40.194 "compare_and_write": false,
00:12:40.194 "abort": true, 00:12:40.194 "seek_hole": false, 00:12:40.194 "seek_data": false, 00:12:40.194 "copy": true, 00:12:40.194 "nvme_iov_md": false 00:12:40.194 }, 00:12:40.194 "memory_domains": [ 00:12:40.194 { 00:12:40.194 "dma_device_id": "system", 00:12:40.194 "dma_device_type": 1 00:12:40.194 }, 00:12:40.194 { 00:12:40.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.194 "dma_device_type": 2 00:12:40.194 } 00:12:40.194 ], 00:12:40.194 "driver_specific": {} 00:12:40.194 } 00:12:40.194 ] 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.194 "name": "Existed_Raid", 00:12:40.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.194 "strip_size_kb": 64, 00:12:40.194 "state": "configuring", 00:12:40.194 "raid_level": "concat", 00:12:40.194 "superblock": false, 00:12:40.194 "num_base_bdevs": 3, 00:12:40.194 "num_base_bdevs_discovered": 2, 00:12:40.194 "num_base_bdevs_operational": 3, 00:12:40.194 "base_bdevs_list": [ 00:12:40.194 { 00:12:40.194 "name": "BaseBdev1", 00:12:40.194 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:40.194 "is_configured": true, 00:12:40.194 "data_offset": 0, 00:12:40.194 "data_size": 65536 00:12:40.194 }, 00:12:40.194 { 00:12:40.194 "name": null, 00:12:40.194 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:40.194 "is_configured": false, 00:12:40.194 "data_offset": 0, 00:12:40.194 "data_size": 65536 00:12:40.194 }, 00:12:40.194 { 00:12:40.194 "name": "BaseBdev3", 00:12:40.194 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:40.194 "is_configured": true, 00:12:40.194 "data_offset": 0, 00:12:40.194 "data_size": 65536 00:12:40.194 } 00:12:40.194 ] 00:12:40.194 }' 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.194 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.815 07:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.815 07:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:40.815 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.815 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.815 07:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.815 [2024-11-20 07:09:38.044773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.815 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.816 "name": "Existed_Raid", 00:12:40.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.816 "strip_size_kb": 64, 00:12:40.816 "state": "configuring", 00:12:40.816 "raid_level": "concat", 00:12:40.816 "superblock": false, 00:12:40.816 "num_base_bdevs": 3, 00:12:40.816 "num_base_bdevs_discovered": 1, 00:12:40.816 "num_base_bdevs_operational": 3, 00:12:40.816 "base_bdevs_list": [ 00:12:40.816 { 00:12:40.816 "name": "BaseBdev1", 00:12:40.816 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:40.816 "is_configured": true, 00:12:40.816 "data_offset": 0, 00:12:40.816 "data_size": 65536 00:12:40.816 }, 00:12:40.816 { 00:12:40.816 "name": null, 00:12:40.816 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:40.816 "is_configured": false, 00:12:40.816 "data_offset": 0, 00:12:40.816 "data_size": 65536 00:12:40.816 }, 00:12:40.816 { 00:12:40.816 "name": null, 00:12:40.816 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:40.816 "is_configured": false, 00:12:40.816 "data_offset": 0, 00:12:40.816 "data_size": 65536 00:12:40.816 
} 00:12:40.816 ] 00:12:40.816 }' 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.816 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.381 [2024-11-20 07:09:38.645042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.381 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.382 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.641 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.641 "name": "Existed_Raid", 00:12:41.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.641 "strip_size_kb": 64, 00:12:41.641 "state": "configuring", 00:12:41.641 "raid_level": "concat", 00:12:41.641 "superblock": false, 00:12:41.641 "num_base_bdevs": 3, 00:12:41.641 "num_base_bdevs_discovered": 2, 00:12:41.641 "num_base_bdevs_operational": 3, 00:12:41.641 "base_bdevs_list": [ 00:12:41.641 { 00:12:41.641 "name": "BaseBdev1", 00:12:41.641 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:41.641 "is_configured": true, 00:12:41.641 "data_offset": 0, 00:12:41.641 "data_size": 65536 00:12:41.641 }, 00:12:41.641 { 
00:12:41.641 "name": null, 00:12:41.641 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:41.641 "is_configured": false, 00:12:41.641 "data_offset": 0, 00:12:41.641 "data_size": 65536 00:12:41.641 }, 00:12:41.641 { 00:12:41.641 "name": "BaseBdev3", 00:12:41.641 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:41.641 "is_configured": true, 00:12:41.641 "data_offset": 0, 00:12:41.641 "data_size": 65536 00:12:41.641 } 00:12:41.641 ] 00:12:41.641 }' 00:12:41.641 07:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.641 07:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.900 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.900 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.900 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:41.900 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.900 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.158 [2024-11-20 07:09:39.249211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state 
Existed_Raid configuring concat 64 3 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.158 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.159 "name": "Existed_Raid", 00:12:42.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.159 "strip_size_kb": 64, 00:12:42.159 "state": "configuring", 00:12:42.159 "raid_level": "concat", 00:12:42.159 "superblock": false, 00:12:42.159 "num_base_bdevs": 3, 
00:12:42.159 "num_base_bdevs_discovered": 1, 00:12:42.159 "num_base_bdevs_operational": 3, 00:12:42.159 "base_bdevs_list": [ 00:12:42.159 { 00:12:42.159 "name": null, 00:12:42.159 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:42.159 "is_configured": false, 00:12:42.159 "data_offset": 0, 00:12:42.159 "data_size": 65536 00:12:42.159 }, 00:12:42.159 { 00:12:42.159 "name": null, 00:12:42.159 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:42.159 "is_configured": false, 00:12:42.159 "data_offset": 0, 00:12:42.159 "data_size": 65536 00:12:42.159 }, 00:12:42.159 { 00:12:42.159 "name": "BaseBdev3", 00:12:42.159 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:42.159 "is_configured": true, 00:12:42.159 "data_offset": 0, 00:12:42.159 "data_size": 65536 00:12:42.159 } 00:12:42.159 ] 00:12:42.159 }' 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.159 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.725 07:09:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.725 [2024-11-20 07:09:39.917792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.725 "name": "Existed_Raid", 00:12:42.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.725 "strip_size_kb": 64, 00:12:42.725 "state": "configuring", 00:12:42.725 "raid_level": "concat", 00:12:42.725 "superblock": false, 00:12:42.725 "num_base_bdevs": 3, 00:12:42.725 "num_base_bdevs_discovered": 2, 00:12:42.725 "num_base_bdevs_operational": 3, 00:12:42.725 "base_bdevs_list": [ 00:12:42.725 { 00:12:42.725 "name": null, 00:12:42.725 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:42.725 "is_configured": false, 00:12:42.725 "data_offset": 0, 00:12:42.725 "data_size": 65536 00:12:42.725 }, 00:12:42.725 { 00:12:42.725 "name": "BaseBdev2", 00:12:42.725 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:42.725 "is_configured": true, 00:12:42.725 "data_offset": 0, 00:12:42.725 "data_size": 65536 00:12:42.725 }, 00:12:42.725 { 00:12:42.725 "name": "BaseBdev3", 00:12:42.725 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:42.725 "is_configured": true, 00:12:42.725 "data_offset": 0, 00:12:42.725 "data_size": 65536 00:12:42.725 } 00:12:42.725 ] 00:12:42.725 }' 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.725 07:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u bc8ccb93-a0f0-4525-a0ca-c974c2d2907c 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.292 [2024-11-20 07:09:40.576147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:43.292 [2024-11-20 07:09:40.576201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:43.292 [2024-11-20 07:09:40.576217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:12:43.292 [2024-11-20 07:09:40.576553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:43.292 [2024-11-20 07:09:40.576749] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:43.292 [2024-11-20 07:09:40.576765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:43.292 [2024-11-20 07:09:40.577093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:43.292 NewBaseBdev 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.292 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.292 [ 00:12:43.292 { 00:12:43.292 "name": "NewBaseBdev", 00:12:43.292 "aliases": [ 00:12:43.292 "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c" 00:12:43.292 ], 00:12:43.292 "product_name": "Malloc disk", 00:12:43.292 "block_size": 512, 00:12:43.292 "num_blocks": 65536, 00:12:43.292 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:43.292 "assigned_rate_limits": { 
00:12:43.292 "rw_ios_per_sec": 0, 00:12:43.292 "rw_mbytes_per_sec": 0, 00:12:43.292 "r_mbytes_per_sec": 0, 00:12:43.292 "w_mbytes_per_sec": 0 00:12:43.292 }, 00:12:43.292 "claimed": true, 00:12:43.292 "claim_type": "exclusive_write", 00:12:43.292 "zoned": false, 00:12:43.292 "supported_io_types": { 00:12:43.292 "read": true, 00:12:43.292 "write": true, 00:12:43.292 "unmap": true, 00:12:43.292 "flush": true, 00:12:43.292 "reset": true, 00:12:43.292 "nvme_admin": false, 00:12:43.292 "nvme_io": false, 00:12:43.292 "nvme_io_md": false, 00:12:43.292 "write_zeroes": true, 00:12:43.292 "zcopy": true, 00:12:43.292 "get_zone_info": false, 00:12:43.292 "zone_management": false, 00:12:43.292 "zone_append": false, 00:12:43.292 "compare": false, 00:12:43.292 "compare_and_write": false, 00:12:43.292 "abort": true, 00:12:43.551 "seek_hole": false, 00:12:43.551 "seek_data": false, 00:12:43.551 "copy": true, 00:12:43.551 "nvme_iov_md": false 00:12:43.551 }, 00:12:43.551 "memory_domains": [ 00:12:43.551 { 00:12:43.551 "dma_device_id": "system", 00:12:43.551 "dma_device_type": 1 00:12:43.551 }, 00:12:43.551 { 00:12:43.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.551 "dma_device_type": 2 00:12:43.551 } 00:12:43.551 ], 00:12:43.551 "driver_specific": {} 00:12:43.551 } 00:12:43.551 ] 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 
00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.551 "name": "Existed_Raid", 00:12:43.551 "uuid": "7c767965-1e47-407a-970e-e4d3b2a1f50e", 00:12:43.551 "strip_size_kb": 64, 00:12:43.551 "state": "online", 00:12:43.551 "raid_level": "concat", 00:12:43.551 "superblock": false, 00:12:43.551 "num_base_bdevs": 3, 00:12:43.551 "num_base_bdevs_discovered": 3, 00:12:43.551 "num_base_bdevs_operational": 3, 00:12:43.551 "base_bdevs_list": [ 00:12:43.551 { 00:12:43.551 "name": "NewBaseBdev", 00:12:43.551 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:43.551 "is_configured": true, 00:12:43.551 "data_offset": 0, 00:12:43.551 "data_size": 65536 00:12:43.551 }, 00:12:43.551 { 00:12:43.551 "name": 
"BaseBdev2", 00:12:43.551 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:43.551 "is_configured": true, 00:12:43.551 "data_offset": 0, 00:12:43.551 "data_size": 65536 00:12:43.551 }, 00:12:43.551 { 00:12:43.551 "name": "BaseBdev3", 00:12:43.551 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:43.551 "is_configured": true, 00:12:43.551 "data_offset": 0, 00:12:43.551 "data_size": 65536 00:12:43.551 } 00:12:43.551 ] 00:12:43.551 }' 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.551 07:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.119 [2024-11-20 07:09:41.140818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:44.119 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:44.119 "name": "Existed_Raid", 00:12:44.119 "aliases": [ 00:12:44.119 "7c767965-1e47-407a-970e-e4d3b2a1f50e" 00:12:44.119 ], 00:12:44.119 "product_name": "Raid Volume", 00:12:44.119 "block_size": 512, 00:12:44.119 "num_blocks": 196608, 00:12:44.119 "uuid": "7c767965-1e47-407a-970e-e4d3b2a1f50e", 00:12:44.119 "assigned_rate_limits": { 00:12:44.119 "rw_ios_per_sec": 0, 00:12:44.119 "rw_mbytes_per_sec": 0, 00:12:44.119 "r_mbytes_per_sec": 0, 00:12:44.119 "w_mbytes_per_sec": 0 00:12:44.119 }, 00:12:44.120 "claimed": false, 00:12:44.120 "zoned": false, 00:12:44.120 "supported_io_types": { 00:12:44.120 "read": true, 00:12:44.120 "write": true, 00:12:44.120 "unmap": true, 00:12:44.120 "flush": true, 00:12:44.120 "reset": true, 00:12:44.120 "nvme_admin": false, 00:12:44.120 "nvme_io": false, 00:12:44.120 "nvme_io_md": false, 00:12:44.120 "write_zeroes": true, 00:12:44.120 "zcopy": false, 00:12:44.120 "get_zone_info": false, 00:12:44.120 "zone_management": false, 00:12:44.120 "zone_append": false, 00:12:44.120 "compare": false, 00:12:44.120 "compare_and_write": false, 00:12:44.120 "abort": false, 00:12:44.120 "seek_hole": false, 00:12:44.120 "seek_data": false, 00:12:44.120 "copy": false, 00:12:44.120 "nvme_iov_md": false 00:12:44.120 }, 00:12:44.120 "memory_domains": [ 00:12:44.120 { 00:12:44.120 "dma_device_id": "system", 00:12:44.120 "dma_device_type": 1 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.120 "dma_device_type": 2 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "dma_device_id": "system", 00:12:44.120 "dma_device_type": 1 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.120 "dma_device_type": 2 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "dma_device_id": "system", 00:12:44.120 "dma_device_type": 1 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:44.120 "dma_device_type": 2 00:12:44.120 } 00:12:44.120 ], 00:12:44.120 "driver_specific": { 00:12:44.120 "raid": { 00:12:44.120 "uuid": "7c767965-1e47-407a-970e-e4d3b2a1f50e", 00:12:44.120 "strip_size_kb": 64, 00:12:44.120 "state": "online", 00:12:44.120 "raid_level": "concat", 00:12:44.120 "superblock": false, 00:12:44.120 "num_base_bdevs": 3, 00:12:44.120 "num_base_bdevs_discovered": 3, 00:12:44.120 "num_base_bdevs_operational": 3, 00:12:44.120 "base_bdevs_list": [ 00:12:44.120 { 00:12:44.120 "name": "NewBaseBdev", 00:12:44.120 "uuid": "bc8ccb93-a0f0-4525-a0ca-c974c2d2907c", 00:12:44.120 "is_configured": true, 00:12:44.120 "data_offset": 0, 00:12:44.120 "data_size": 65536 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "name": "BaseBdev2", 00:12:44.120 "uuid": "fdd32e8a-ffc1-411c-b154-50ca7781ab73", 00:12:44.120 "is_configured": true, 00:12:44.120 "data_offset": 0, 00:12:44.120 "data_size": 65536 00:12:44.120 }, 00:12:44.120 { 00:12:44.120 "name": "BaseBdev3", 00:12:44.120 "uuid": "66121856-c7c1-4cb8-b59e-eefa54554b20", 00:12:44.120 "is_configured": true, 00:12:44.120 "data_offset": 0, 00:12:44.120 "data_size": 65536 00:12:44.120 } 00:12:44.120 ] 00:12:44.120 } 00:12:44.120 } 00:12:44.120 }' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:44.120 BaseBdev2 00:12:44.120 BaseBdev3' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.120 07:09:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:44.120 
07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.120 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.381 [2024-11-20 07:09:41.460539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.381 [2024-11-20 07:09:41.460797] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.381 [2024-11-20 07:09:41.461039] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.381 [2024-11-20 07:09:41.461127] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.381 [2024-11-20 07:09:41.461149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65550 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 65550 ']' 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65550 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65550 00:12:44.381 killing process with pid 65550 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65550' 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65550 00:12:44.381 [2024-11-20 07:09:41.502549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.381 07:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65550 00:12:44.640 [2024-11-20 07:09:41.774267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:45.576 07:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:45.576 00:12:45.576 real 0m11.885s 00:12:45.576 user 0m19.709s 00:12:45.576 sys 0m1.636s 00:12:45.576 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.576 ************************************ 00:12:45.576 END TEST raid_state_function_test 00:12:45.576 ************************************ 00:12:45.576 07:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.576 07:09:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test concat 3 true 00:12:45.576 07:09:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:45.576 07:09:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.576 07:09:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 ************************************ 00:12:45.835 START TEST raid_state_function_test_sb 00:12:45.835 ************************************ 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 
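The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdev$i` loop traced above builds the `base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')` array for the test. The same name generation, sketched in Python:

```python
def base_bdev_names(num_base_bdevs: int) -> list[str]:
    # Equivalent of the traced bash loop:
    #   for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done
    return [f"BaseBdev{i}" for i in range(1, num_base_bdevs + 1)]

print(base_bdev_names(3))  # ['BaseBdev1', 'BaseBdev2', 'BaseBdev3']
```

Indices start at 1, matching the `BaseBdev1`/`BaseBdev2`/`BaseBdev3` names that appear throughout this log.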
00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:45.835 Process raid pid: 66184 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66184 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66184' 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 66184 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66184 ']' 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.835 07:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.835 [2024-11-20 07:09:42.995716] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
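The setup traced above derives the create-time arguments from the test parameters: `'[' concat '!=' raid1 ']'` gates `strip_size_create_arg='-z 64'`, and `'[' true = true ']'` gates `superblock_create_arg=-s`, which later combine into `bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid`. A simplified sketch of that assembly, inferred from this log only (the real script parametrizes more than is shown here):

```python
def raid_create_cmd(raid_level: str, base_bdevs: list[str],
                    name: str, superblock: bool) -> list[str]:
    # Per the trace: raid1 takes no strip size; other levels here use -z 64,
    # and -s asks for an on-disk superblock on each base bdev.
    args = ["bdev_raid_create"]
    if raid_level != "raid1":
        args += ["-z", "64"]
    if superblock:
        args.append("-s")
    args += ["-r", raid_level, "-b", " ".join(base_bdevs), "-n", name]
    return args

cmd = raid_create_cmd("concat",
                      ["BaseBdev1", "BaseBdev2", "BaseBdev3"],
                      "Existed_Raid", True)
print(" ".join(cmd))
```

This reproduces (modulo shell quoting) the `rpc_cmd bdev_raid_create` invocation that follows in the trace.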
00:12:45.835 [2024-11-20 07:09:42.996067] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.094 [2024-11-20 07:09:43.173481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.094 [2024-11-20 07:09:43.304470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.353 [2024-11-20 07:09:43.505616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.353 [2024-11-20 07:09:43.505671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.921 [2024-11-20 07:09:43.970802] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.921 [2024-11-20 07:09:43.971055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.921 [2024-11-20 07:09:43.971193] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.921 [2024-11-20 07:09:43.971275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.921 [2024-11-20 07:09:43.971391] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:46.921 [2024-11-20 07:09:43.971453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.921 07:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.921 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.921 "name": "Existed_Raid", 00:12:46.921 "uuid": "c9278f63-36ba-4e8a-a388-c7b2ec78f705", 00:12:46.921 "strip_size_kb": 64, 00:12:46.921 "state": "configuring", 00:12:46.921 "raid_level": "concat", 00:12:46.921 "superblock": true, 00:12:46.921 "num_base_bdevs": 3, 00:12:46.921 "num_base_bdevs_discovered": 0, 00:12:46.921 "num_base_bdevs_operational": 3, 00:12:46.921 "base_bdevs_list": [ 00:12:46.921 { 00:12:46.921 "name": "BaseBdev1", 00:12:46.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.921 "is_configured": false, 00:12:46.921 "data_offset": 0, 00:12:46.921 "data_size": 0 00:12:46.921 }, 00:12:46.921 { 00:12:46.921 "name": "BaseBdev2", 00:12:46.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.921 "is_configured": false, 00:12:46.921 "data_offset": 0, 00:12:46.921 "data_size": 0 00:12:46.921 }, 00:12:46.921 { 00:12:46.921 "name": "BaseBdev3", 00:12:46.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.921 "is_configured": false, 00:12:46.921 "data_offset": 0, 00:12:46.921 "data_size": 0 00:12:46.921 } 00:12:46.921 ] 00:12:46.921 }' 00:12:46.921 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.921 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.180 [2024-11-20 07:09:44.482935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.180 [2024-11-20 07:09:44.482992] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.180 [2024-11-20 07:09:44.490930] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:47.180 [2024-11-20 07:09:44.491133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:47.180 [2024-11-20 07:09:44.491304] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:47.180 [2024-11-20 07:09:44.491371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:47.180 [2024-11-20 07:09:44.491536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:47.180 [2024-11-20 07:09:44.491597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.180 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.439 [2024-11-20 07:09:44.537397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:47.439 BaseBdev1 
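The trace above creates the RAID before any base bdev exists, so it sits in state `configuring` with `num_base_bdevs_discovered: 0`; after `bdev_malloc_create 32 512 -b BaseBdev1` the count rises to 1. In every dump in this log, the discovered count equals the number of entries with `"is_configured": true` — a relationship that can be sketched (as an observation from this log, not a guarantee from the SPDK source) as:

```python
def num_discovered(base_bdevs_list: list[dict]) -> int:
    # num_base_bdevs_discovered, as reported in the RPC dumps above, tracks
    # how many base bdevs are configured.
    return sum(1 for b in base_bdevs_list if b["is_configured"])

# Before bdev_malloc_create: all three slots hold the zero UUID placeholder.
before = [{"name": n, "is_configured": False}
          for n in ("BaseBdev1", "BaseBdev2", "BaseBdev3")]
assert num_discovered(before) == 0   # raid stays in state "configuring"

# After `bdev_malloc_create 32 512 -b BaseBdev1` claims the first slot:
after = [dict(b) for b in before]
after[0]["is_configured"] = True
assert num_discovered(after) == 1    # matches num_base_bdevs_discovered: 1
```

The raid only transitions out of `configuring` once discovered reaches `num_base_bdevs_operational` (3 in this test), consistent with the `online` dump earlier in the log.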
00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.439 [ 00:12:47.439 { 00:12:47.439 "name": "BaseBdev1", 00:12:47.439 "aliases": [ 00:12:47.439 "d64c41e8-e86a-4998-b84b-678819560824" 00:12:47.439 ], 00:12:47.439 "product_name": "Malloc disk", 00:12:47.439 "block_size": 512, 00:12:47.439 "num_blocks": 65536, 00:12:47.439 "uuid": "d64c41e8-e86a-4998-b84b-678819560824", 00:12:47.439 "assigned_rate_limits": { 00:12:47.439 
"rw_ios_per_sec": 0, 00:12:47.439 "rw_mbytes_per_sec": 0, 00:12:47.439 "r_mbytes_per_sec": 0, 00:12:47.439 "w_mbytes_per_sec": 0 00:12:47.439 }, 00:12:47.439 "claimed": true, 00:12:47.439 "claim_type": "exclusive_write", 00:12:47.439 "zoned": false, 00:12:47.439 "supported_io_types": { 00:12:47.439 "read": true, 00:12:47.439 "write": true, 00:12:47.439 "unmap": true, 00:12:47.439 "flush": true, 00:12:47.439 "reset": true, 00:12:47.439 "nvme_admin": false, 00:12:47.439 "nvme_io": false, 00:12:47.439 "nvme_io_md": false, 00:12:47.439 "write_zeroes": true, 00:12:47.439 "zcopy": true, 00:12:47.439 "get_zone_info": false, 00:12:47.439 "zone_management": false, 00:12:47.439 "zone_append": false, 00:12:47.439 "compare": false, 00:12:47.439 "compare_and_write": false, 00:12:47.439 "abort": true, 00:12:47.439 "seek_hole": false, 00:12:47.439 "seek_data": false, 00:12:47.439 "copy": true, 00:12:47.439 "nvme_iov_md": false 00:12:47.439 }, 00:12:47.439 "memory_domains": [ 00:12:47.439 { 00:12:47.439 "dma_device_id": "system", 00:12:47.439 "dma_device_type": 1 00:12:47.439 }, 00:12:47.439 { 00:12:47.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.439 "dma_device_type": 2 00:12:47.439 } 00:12:47.439 ], 00:12:47.439 "driver_specific": {} 00:12:47.439 } 00:12:47.439 ] 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.439 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.440 "name": "Existed_Raid", 00:12:47.440 "uuid": "cf8fe337-4f9a-4079-a65a-9109836ae8f3", 00:12:47.440 "strip_size_kb": 64, 00:12:47.440 "state": "configuring", 00:12:47.440 "raid_level": "concat", 00:12:47.440 "superblock": true, 00:12:47.440 "num_base_bdevs": 3, 00:12:47.440 "num_base_bdevs_discovered": 1, 00:12:47.440 "num_base_bdevs_operational": 3, 00:12:47.440 "base_bdevs_list": [ 00:12:47.440 { 00:12:47.440 "name": "BaseBdev1", 00:12:47.440 "uuid": "d64c41e8-e86a-4998-b84b-678819560824", 00:12:47.440 "is_configured": true, 00:12:47.440 "data_offset": 2048, 00:12:47.440 "data_size": 
63488 00:12:47.440 }, 00:12:47.440 { 00:12:47.440 "name": "BaseBdev2", 00:12:47.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.440 "is_configured": false, 00:12:47.440 "data_offset": 0, 00:12:47.440 "data_size": 0 00:12:47.440 }, 00:12:47.440 { 00:12:47.440 "name": "BaseBdev3", 00:12:47.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.440 "is_configured": false, 00:12:47.440 "data_offset": 0, 00:12:47.440 "data_size": 0 00:12:47.440 } 00:12:47.440 ] 00:12:47.440 }' 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.440 07:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.035 [2024-11-20 07:09:45.097685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.035 [2024-11-20 07:09:45.097751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.035 [2024-11-20 07:09:45.105726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.035 [2024-11-20 
07:09:45.108494] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.035 [2024-11-20 07:09:45.108692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.035 [2024-11-20 07:09:45.108849] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.035 [2024-11-20 07:09:45.108939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:48.035 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.036 "name": "Existed_Raid", 00:12:48.036 "uuid": "148ae763-192b-406f-8df6-15af5356a9f6", 00:12:48.036 "strip_size_kb": 64, 00:12:48.036 "state": "configuring", 00:12:48.036 "raid_level": "concat", 00:12:48.036 "superblock": true, 00:12:48.036 "num_base_bdevs": 3, 00:12:48.036 "num_base_bdevs_discovered": 1, 00:12:48.036 "num_base_bdevs_operational": 3, 00:12:48.036 "base_bdevs_list": [ 00:12:48.036 { 00:12:48.036 "name": "BaseBdev1", 00:12:48.036 "uuid": "d64c41e8-e86a-4998-b84b-678819560824", 00:12:48.036 "is_configured": true, 00:12:48.036 "data_offset": 2048, 00:12:48.036 "data_size": 63488 00:12:48.036 }, 00:12:48.036 { 00:12:48.036 "name": "BaseBdev2", 00:12:48.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.036 "is_configured": false, 00:12:48.036 "data_offset": 0, 00:12:48.036 "data_size": 0 00:12:48.036 }, 00:12:48.036 { 00:12:48.036 "name": "BaseBdev3", 00:12:48.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.036 "is_configured": false, 00:12:48.036 "data_offset": 0, 00:12:48.036 "data_size": 0 00:12:48.036 } 00:12:48.036 ] 00:12:48.036 }' 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.036 07:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:48.294 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:48.294 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.294 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.554 [2024-11-20 07:09:45.649433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:48.554 BaseBdev2 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.554 [ 00:12:48.554 { 00:12:48.554 "name": "BaseBdev2", 00:12:48.554 "aliases": [ 00:12:48.554 "c5968939-7db8-4d5a-8b7a-294b1c3df750" 00:12:48.554 ], 00:12:48.554 "product_name": "Malloc disk", 00:12:48.554 "block_size": 512, 00:12:48.554 "num_blocks": 65536, 00:12:48.554 "uuid": "c5968939-7db8-4d5a-8b7a-294b1c3df750", 00:12:48.554 "assigned_rate_limits": { 00:12:48.554 "rw_ios_per_sec": 0, 00:12:48.554 "rw_mbytes_per_sec": 0, 00:12:48.554 "r_mbytes_per_sec": 0, 00:12:48.554 "w_mbytes_per_sec": 0 00:12:48.554 }, 00:12:48.554 "claimed": true, 00:12:48.554 "claim_type": "exclusive_write", 00:12:48.554 "zoned": false, 00:12:48.554 "supported_io_types": { 00:12:48.554 "read": true, 00:12:48.554 "write": true, 00:12:48.554 "unmap": true, 00:12:48.554 "flush": true, 00:12:48.554 "reset": true, 00:12:48.554 "nvme_admin": false, 00:12:48.554 "nvme_io": false, 00:12:48.554 "nvme_io_md": false, 00:12:48.554 "write_zeroes": true, 00:12:48.554 "zcopy": true, 00:12:48.554 "get_zone_info": false, 00:12:48.554 "zone_management": false, 00:12:48.554 "zone_append": false, 00:12:48.554 "compare": false, 00:12:48.554 "compare_and_write": false, 00:12:48.554 "abort": true, 00:12:48.554 "seek_hole": false, 00:12:48.554 "seek_data": false, 00:12:48.554 "copy": true, 00:12:48.554 "nvme_iov_md": false 00:12:48.554 }, 00:12:48.554 "memory_domains": [ 00:12:48.554 { 00:12:48.554 "dma_device_id": "system", 00:12:48.554 "dma_device_type": 1 00:12:48.554 }, 00:12:48.554 { 00:12:48.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.554 "dma_device_type": 2 00:12:48.554 } 00:12:48.554 ], 00:12:48.554 "driver_specific": {} 00:12:48.554 } 00:12:48.554 ] 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.554 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.554 "name": "Existed_Raid", 00:12:48.554 "uuid": "148ae763-192b-406f-8df6-15af5356a9f6", 00:12:48.554 "strip_size_kb": 64, 00:12:48.554 "state": "configuring", 00:12:48.554 "raid_level": "concat", 00:12:48.554 "superblock": true, 00:12:48.554 "num_base_bdevs": 3, 00:12:48.554 "num_base_bdevs_discovered": 2, 00:12:48.554 "num_base_bdevs_operational": 3, 00:12:48.554 "base_bdevs_list": [ 00:12:48.554 { 00:12:48.554 "name": "BaseBdev1", 00:12:48.554 "uuid": "d64c41e8-e86a-4998-b84b-678819560824", 00:12:48.554 "is_configured": true, 00:12:48.554 "data_offset": 2048, 00:12:48.555 "data_size": 63488 00:12:48.555 }, 00:12:48.555 { 00:12:48.555 "name": "BaseBdev2", 00:12:48.555 "uuid": "c5968939-7db8-4d5a-8b7a-294b1c3df750", 00:12:48.555 "is_configured": true, 00:12:48.555 "data_offset": 2048, 00:12:48.555 "data_size": 63488 00:12:48.555 }, 00:12:48.555 { 00:12:48.555 "name": "BaseBdev3", 00:12:48.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.555 "is_configured": false, 00:12:48.555 "data_offset": 0, 00:12:48.555 "data_size": 0 00:12:48.555 } 00:12:48.555 ] 00:12:48.555 }' 00:12:48.555 07:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.555 07:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.122 [2024-11-20 07:09:46.269165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:49.122 [2024-11-20 07:09:46.269826] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:49.122 [2024-11-20 07:09:46.269895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:49.122 BaseBdev3 00:12:49.122 [2024-11-20 07:09:46.270295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:49.122 [2024-11-20 07:09:46.270548] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:49.122 [2024-11-20 07:09:46.270566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.122 [2024-11-20 07:09:46.270751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.122 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.122 [ 00:12:49.122 { 00:12:49.122 "name": "BaseBdev3", 00:12:49.122 "aliases": [ 00:12:49.122 "de10bdc3-e515-4ec5-a72b-65d9576444f8" 00:12:49.122 ], 00:12:49.122 "product_name": "Malloc disk", 00:12:49.122 "block_size": 512, 00:12:49.122 "num_blocks": 65536, 00:12:49.122 "uuid": "de10bdc3-e515-4ec5-a72b-65d9576444f8", 00:12:49.122 "assigned_rate_limits": { 00:12:49.122 "rw_ios_per_sec": 0, 00:12:49.122 "rw_mbytes_per_sec": 0, 00:12:49.122 "r_mbytes_per_sec": 0, 00:12:49.122 "w_mbytes_per_sec": 0 00:12:49.122 }, 00:12:49.122 "claimed": true, 00:12:49.122 "claim_type": "exclusive_write", 00:12:49.123 "zoned": false, 00:12:49.123 "supported_io_types": { 00:12:49.123 "read": true, 00:12:49.123 "write": true, 00:12:49.123 "unmap": true, 00:12:49.123 "flush": true, 00:12:49.123 "reset": true, 00:12:49.123 "nvme_admin": false, 00:12:49.123 "nvme_io": false, 00:12:49.123 "nvme_io_md": false, 00:12:49.123 "write_zeroes": true, 00:12:49.123 "zcopy": true, 00:12:49.123 "get_zone_info": false, 00:12:49.123 "zone_management": false, 00:12:49.123 "zone_append": false, 00:12:49.123 "compare": false, 00:12:49.123 "compare_and_write": false, 00:12:49.123 "abort": true, 00:12:49.123 "seek_hole": false, 00:12:49.123 "seek_data": false, 00:12:49.123 "copy": true, 00:12:49.123 "nvme_iov_md": false 00:12:49.123 }, 00:12:49.123 "memory_domains": [ 00:12:49.123 { 00:12:49.123 "dma_device_id": "system", 00:12:49.123 "dma_device_type": 1 00:12:49.123 }, 00:12:49.123 { 00:12:49.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.123 "dma_device_type": 2 00:12:49.123 } 00:12:49.123 ], 00:12:49.123 "driver_specific": 
{} 00:12:49.123 } 00:12:49.123 ] 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.123 "name": "Existed_Raid", 00:12:49.123 "uuid": "148ae763-192b-406f-8df6-15af5356a9f6", 00:12:49.123 "strip_size_kb": 64, 00:12:49.123 "state": "online", 00:12:49.123 "raid_level": "concat", 00:12:49.123 "superblock": true, 00:12:49.123 "num_base_bdevs": 3, 00:12:49.123 "num_base_bdevs_discovered": 3, 00:12:49.123 "num_base_bdevs_operational": 3, 00:12:49.123 "base_bdevs_list": [ 00:12:49.123 { 00:12:49.123 "name": "BaseBdev1", 00:12:49.123 "uuid": "d64c41e8-e86a-4998-b84b-678819560824", 00:12:49.123 "is_configured": true, 00:12:49.123 "data_offset": 2048, 00:12:49.123 "data_size": 63488 00:12:49.123 }, 00:12:49.123 { 00:12:49.123 "name": "BaseBdev2", 00:12:49.123 "uuid": "c5968939-7db8-4d5a-8b7a-294b1c3df750", 00:12:49.123 "is_configured": true, 00:12:49.123 "data_offset": 2048, 00:12:49.123 "data_size": 63488 00:12:49.123 }, 00:12:49.123 { 00:12:49.123 "name": "BaseBdev3", 00:12:49.123 "uuid": "de10bdc3-e515-4ec5-a72b-65d9576444f8", 00:12:49.123 "is_configured": true, 00:12:49.123 "data_offset": 2048, 00:12:49.123 "data_size": 63488 00:12:49.123 } 00:12:49.123 ] 00:12:49.123 }' 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.123 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_info 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:49.688 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.689 [2024-11-20 07:09:46.825964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:49.689 "name": "Existed_Raid", 00:12:49.689 "aliases": [ 00:12:49.689 "148ae763-192b-406f-8df6-15af5356a9f6" 00:12:49.689 ], 00:12:49.689 "product_name": "Raid Volume", 00:12:49.689 "block_size": 512, 00:12:49.689 "num_blocks": 190464, 00:12:49.689 "uuid": "148ae763-192b-406f-8df6-15af5356a9f6", 00:12:49.689 "assigned_rate_limits": { 00:12:49.689 "rw_ios_per_sec": 0, 00:12:49.689 "rw_mbytes_per_sec": 0, 00:12:49.689 "r_mbytes_per_sec": 0, 00:12:49.689 "w_mbytes_per_sec": 0 00:12:49.689 }, 00:12:49.689 "claimed": false, 00:12:49.689 "zoned": false, 00:12:49.689 "supported_io_types": { 00:12:49.689 "read": true, 00:12:49.689 "write": true, 00:12:49.689 "unmap": true, 00:12:49.689 "flush": true, 00:12:49.689 "reset": true, 00:12:49.689 "nvme_admin": false, 00:12:49.689 "nvme_io": false, 00:12:49.689 "nvme_io_md": false, 00:12:49.689 
"write_zeroes": true, 00:12:49.689 "zcopy": false, 00:12:49.689 "get_zone_info": false, 00:12:49.689 "zone_management": false, 00:12:49.689 "zone_append": false, 00:12:49.689 "compare": false, 00:12:49.689 "compare_and_write": false, 00:12:49.689 "abort": false, 00:12:49.689 "seek_hole": false, 00:12:49.689 "seek_data": false, 00:12:49.689 "copy": false, 00:12:49.689 "nvme_iov_md": false 00:12:49.689 }, 00:12:49.689 "memory_domains": [ 00:12:49.689 { 00:12:49.689 "dma_device_id": "system", 00:12:49.689 "dma_device_type": 1 00:12:49.689 }, 00:12:49.689 { 00:12:49.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.689 "dma_device_type": 2 00:12:49.689 }, 00:12:49.689 { 00:12:49.689 "dma_device_id": "system", 00:12:49.689 "dma_device_type": 1 00:12:49.689 }, 00:12:49.689 { 00:12:49.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.689 "dma_device_type": 2 00:12:49.689 }, 00:12:49.689 { 00:12:49.689 "dma_device_id": "system", 00:12:49.689 "dma_device_type": 1 00:12:49.689 }, 00:12:49.689 { 00:12:49.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.689 "dma_device_type": 2 00:12:49.689 } 00:12:49.689 ], 00:12:49.689 "driver_specific": { 00:12:49.689 "raid": { 00:12:49.689 "uuid": "148ae763-192b-406f-8df6-15af5356a9f6", 00:12:49.689 "strip_size_kb": 64, 00:12:49.689 "state": "online", 00:12:49.689 "raid_level": "concat", 00:12:49.689 "superblock": true, 00:12:49.689 "num_base_bdevs": 3, 00:12:49.689 "num_base_bdevs_discovered": 3, 00:12:49.689 "num_base_bdevs_operational": 3, 00:12:49.689 "base_bdevs_list": [ 00:12:49.689 { 00:12:49.689 "name": "BaseBdev1", 00:12:49.689 "uuid": "d64c41e8-e86a-4998-b84b-678819560824", 00:12:49.689 "is_configured": true, 00:12:49.689 "data_offset": 2048, 00:12:49.689 "data_size": 63488 00:12:49.689 }, 00:12:49.689 { 00:12:49.689 "name": "BaseBdev2", 00:12:49.689 "uuid": "c5968939-7db8-4d5a-8b7a-294b1c3df750", 00:12:49.689 "is_configured": true, 00:12:49.689 "data_offset": 2048, 00:12:49.689 "data_size": 63488 00:12:49.689 }, 
00:12:49.689 { 00:12:49.689 "name": "BaseBdev3", 00:12:49.689 "uuid": "de10bdc3-e515-4ec5-a72b-65d9576444f8", 00:12:49.689 "is_configured": true, 00:12:49.689 "data_offset": 2048, 00:12:49.689 "data_size": 63488 00:12:49.689 } 00:12:49.689 ] 00:12:49.689 } 00:12:49.689 } 00:12:49.689 }' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:49.689 BaseBdev2 00:12:49.689 BaseBdev3' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.689 07:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.947 
07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.947 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.948 [2024-11-20 07:09:47.157681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:49.948 [2024-11-20 07:09:47.158071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.948 [2024-11-20 07:09:47.158200] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.948 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.206 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.206 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.206 "name": "Existed_Raid", 00:12:50.206 "uuid": "148ae763-192b-406f-8df6-15af5356a9f6", 00:12:50.206 "strip_size_kb": 64, 00:12:50.206 "state": "offline", 00:12:50.206 "raid_level": "concat", 00:12:50.206 "superblock": true, 00:12:50.206 "num_base_bdevs": 3, 00:12:50.206 "num_base_bdevs_discovered": 2, 00:12:50.206 "num_base_bdevs_operational": 2, 00:12:50.206 "base_bdevs_list": [ 00:12:50.206 { 00:12:50.206 "name": null, 00:12:50.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.206 "is_configured": false, 00:12:50.206 "data_offset": 0, 00:12:50.206 "data_size": 63488 00:12:50.206 }, 00:12:50.206 { 00:12:50.206 "name": "BaseBdev2", 00:12:50.206 "uuid": "c5968939-7db8-4d5a-8b7a-294b1c3df750", 00:12:50.206 "is_configured": true, 00:12:50.206 "data_offset": 2048, 00:12:50.206 "data_size": 63488 00:12:50.206 }, 00:12:50.206 { 00:12:50.206 "name": "BaseBdev3", 00:12:50.206 "uuid": "de10bdc3-e515-4ec5-a72b-65d9576444f8", 
00:12:50.206 "is_configured": true, 00:12:50.206 "data_offset": 2048, 00:12:50.206 "data_size": 63488 00:12:50.206 } 00:12:50.206 ] 00:12:50.206 }' 00:12:50.206 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.206 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.464 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.724 [2024-11-20 07:09:47.817802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.724 07:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.724 [2024-11-20 07:09:47.967535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:50.724 [2024-11-20 07:09:47.967908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.984 BaseBdev2 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.984 07:09:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.984 [ 00:12:50.984 { 00:12:50.984 "name": "BaseBdev2", 00:12:50.984 "aliases": [ 00:12:50.984 "c5aeabd1-e925-416d-9eda-31bfbd9d58b4" 00:12:50.984 ], 00:12:50.984 "product_name": "Malloc disk", 00:12:50.984 "block_size": 512, 00:12:50.984 "num_blocks": 65536, 00:12:50.984 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:50.984 "assigned_rate_limits": { 00:12:50.984 "rw_ios_per_sec": 0, 00:12:50.984 "rw_mbytes_per_sec": 0, 00:12:50.984 "r_mbytes_per_sec": 0, 00:12:50.984 "w_mbytes_per_sec": 0 00:12:50.984 }, 00:12:50.984 "claimed": false, 00:12:50.984 "zoned": false, 00:12:50.984 "supported_io_types": { 00:12:50.984 "read": true, 00:12:50.984 "write": true, 00:12:50.984 "unmap": true, 00:12:50.984 "flush": true, 00:12:50.984 "reset": true, 00:12:50.984 "nvme_admin": false, 00:12:50.984 "nvme_io": false, 00:12:50.984 "nvme_io_md": false, 00:12:50.984 "write_zeroes": true, 00:12:50.984 "zcopy": true, 00:12:50.984 "get_zone_info": false, 00:12:50.984 
"zone_management": false, 00:12:50.984 "zone_append": false, 00:12:50.984 "compare": false, 00:12:50.984 "compare_and_write": false, 00:12:50.984 "abort": true, 00:12:50.984 "seek_hole": false, 00:12:50.984 "seek_data": false, 00:12:50.984 "copy": true, 00:12:50.984 "nvme_iov_md": false 00:12:50.984 }, 00:12:50.984 "memory_domains": [ 00:12:50.984 { 00:12:50.984 "dma_device_id": "system", 00:12:50.984 "dma_device_type": 1 00:12:50.984 }, 00:12:50.984 { 00:12:50.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.984 "dma_device_type": 2 00:12:50.984 } 00:12:50.984 ], 00:12:50.984 "driver_specific": {} 00:12:50.984 } 00:12:50.984 ] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.984 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.985 BaseBdev3 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.985 [ 00:12:50.985 { 00:12:50.985 "name": "BaseBdev3", 00:12:50.985 "aliases": [ 00:12:50.985 "1ba5231e-2d10-46c2-a145-965435b58a1a" 00:12:50.985 ], 00:12:50.985 "product_name": "Malloc disk", 00:12:50.985 "block_size": 512, 00:12:50.985 "num_blocks": 65536, 00:12:50.985 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:50.985 "assigned_rate_limits": { 00:12:50.985 "rw_ios_per_sec": 0, 00:12:50.985 "rw_mbytes_per_sec": 0, 00:12:50.985 "r_mbytes_per_sec": 0, 00:12:50.985 "w_mbytes_per_sec": 0 00:12:50.985 }, 00:12:50.985 "claimed": false, 00:12:50.985 "zoned": false, 00:12:50.985 "supported_io_types": { 00:12:50.985 "read": true, 00:12:50.985 "write": true, 00:12:50.985 "unmap": true, 00:12:50.985 "flush": true, 00:12:50.985 "reset": true, 00:12:50.985 "nvme_admin": false, 00:12:50.985 "nvme_io": false, 00:12:50.985 "nvme_io_md": false, 00:12:50.985 "write_zeroes": true, 00:12:50.985 
"zcopy": true, 00:12:50.985 "get_zone_info": false, 00:12:50.985 "zone_management": false, 00:12:50.985 "zone_append": false, 00:12:50.985 "compare": false, 00:12:50.985 "compare_and_write": false, 00:12:50.985 "abort": true, 00:12:50.985 "seek_hole": false, 00:12:50.985 "seek_data": false, 00:12:50.985 "copy": true, 00:12:50.985 "nvme_iov_md": false 00:12:50.985 }, 00:12:50.985 "memory_domains": [ 00:12:50.985 { 00:12:50.985 "dma_device_id": "system", 00:12:50.985 "dma_device_type": 1 00:12:50.985 }, 00:12:50.985 { 00:12:50.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.985 "dma_device_type": 2 00:12:50.985 } 00:12:50.985 ], 00:12:50.985 "driver_specific": {} 00:12:50.985 } 00:12:50.985 ] 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.985 [2024-11-20 07:09:48.280297] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.985 [2024-11-20 07:09:48.280597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.985 [2024-11-20 07:09:48.280679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.985 [2024-11-20 07:09:48.283389] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.985 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.243 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.243 07:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.243 "name": "Existed_Raid", 00:12:51.243 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:51.243 "strip_size_kb": 64, 00:12:51.243 "state": "configuring", 00:12:51.243 "raid_level": "concat", 00:12:51.243 "superblock": true, 00:12:51.243 "num_base_bdevs": 3, 00:12:51.243 "num_base_bdevs_discovered": 2, 00:12:51.243 "num_base_bdevs_operational": 3, 00:12:51.243 "base_bdevs_list": [ 00:12:51.243 { 00:12:51.243 "name": "BaseBdev1", 00:12:51.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.243 "is_configured": false, 00:12:51.243 "data_offset": 0, 00:12:51.243 "data_size": 0 00:12:51.243 }, 00:12:51.243 { 00:12:51.243 "name": "BaseBdev2", 00:12:51.243 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:51.243 "is_configured": true, 00:12:51.243 "data_offset": 2048, 00:12:51.243 "data_size": 63488 00:12:51.243 }, 00:12:51.243 { 00:12:51.243 "name": "BaseBdev3", 00:12:51.243 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:51.243 "is_configured": true, 00:12:51.243 "data_offset": 2048, 00:12:51.243 "data_size": 63488 00:12:51.243 } 00:12:51.243 ] 00:12:51.243 }' 00:12:51.243 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.243 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.501 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:51.501 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.502 [2024-11-20 07:09:48.800530] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.502 07:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.502 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.760 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.760 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.760 "name": "Existed_Raid", 00:12:51.760 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:51.760 "strip_size_kb": 64, 
00:12:51.760 "state": "configuring", 00:12:51.760 "raid_level": "concat", 00:12:51.760 "superblock": true, 00:12:51.760 "num_base_bdevs": 3, 00:12:51.760 "num_base_bdevs_discovered": 1, 00:12:51.760 "num_base_bdevs_operational": 3, 00:12:51.760 "base_bdevs_list": [ 00:12:51.760 { 00:12:51.760 "name": "BaseBdev1", 00:12:51.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.760 "is_configured": false, 00:12:51.760 "data_offset": 0, 00:12:51.760 "data_size": 0 00:12:51.760 }, 00:12:51.760 { 00:12:51.760 "name": null, 00:12:51.760 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:51.760 "is_configured": false, 00:12:51.760 "data_offset": 0, 00:12:51.760 "data_size": 63488 00:12:51.760 }, 00:12:51.760 { 00:12:51.760 "name": "BaseBdev3", 00:12:51.760 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:51.760 "is_configured": true, 00:12:51.760 "data_offset": 2048, 00:12:51.760 "data_size": 63488 00:12:51.760 } 00:12:51.760 ] 00:12:51.760 }' 00:12:51.760 07:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.760 07:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.019 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.019 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.019 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.019 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.019 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.278 [2024-11-20 07:09:49.391382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.278 BaseBdev1 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.278 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.278 
[ 00:12:52.278 { 00:12:52.278 "name": "BaseBdev1", 00:12:52.278 "aliases": [ 00:12:52.278 "3572ff75-6f85-43b9-96ce-d6a874b8784d" 00:12:52.278 ], 00:12:52.278 "product_name": "Malloc disk", 00:12:52.278 "block_size": 512, 00:12:52.278 "num_blocks": 65536, 00:12:52.278 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:52.278 "assigned_rate_limits": { 00:12:52.278 "rw_ios_per_sec": 0, 00:12:52.278 "rw_mbytes_per_sec": 0, 00:12:52.278 "r_mbytes_per_sec": 0, 00:12:52.278 "w_mbytes_per_sec": 0 00:12:52.278 }, 00:12:52.278 "claimed": true, 00:12:52.278 "claim_type": "exclusive_write", 00:12:52.278 "zoned": false, 00:12:52.278 "supported_io_types": { 00:12:52.278 "read": true, 00:12:52.278 "write": true, 00:12:52.278 "unmap": true, 00:12:52.278 "flush": true, 00:12:52.278 "reset": true, 00:12:52.278 "nvme_admin": false, 00:12:52.278 "nvme_io": false, 00:12:52.278 "nvme_io_md": false, 00:12:52.278 "write_zeroes": true, 00:12:52.278 "zcopy": true, 00:12:52.278 "get_zone_info": false, 00:12:52.278 "zone_management": false, 00:12:52.278 "zone_append": false, 00:12:52.278 "compare": false, 00:12:52.278 "compare_and_write": false, 00:12:52.278 "abort": true, 00:12:52.278 "seek_hole": false, 00:12:52.278 "seek_data": false, 00:12:52.278 "copy": true, 00:12:52.278 "nvme_iov_md": false 00:12:52.278 }, 00:12:52.279 "memory_domains": [ 00:12:52.279 { 00:12:52.279 "dma_device_id": "system", 00:12:52.279 "dma_device_type": 1 00:12:52.279 }, 00:12:52.279 { 00:12:52.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.279 "dma_device_type": 2 00:12:52.279 } 00:12:52.279 ], 00:12:52.279 "driver_specific": {} 00:12:52.279 } 00:12:52.279 ] 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.279 "name": "Existed_Raid", 00:12:52.279 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:52.279 "strip_size_kb": 64, 00:12:52.279 "state": "configuring", 00:12:52.279 "raid_level": "concat", 00:12:52.279 "superblock": true, 
00:12:52.279 "num_base_bdevs": 3, 00:12:52.279 "num_base_bdevs_discovered": 2, 00:12:52.279 "num_base_bdevs_operational": 3, 00:12:52.279 "base_bdevs_list": [ 00:12:52.279 { 00:12:52.279 "name": "BaseBdev1", 00:12:52.279 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:52.279 "is_configured": true, 00:12:52.279 "data_offset": 2048, 00:12:52.279 "data_size": 63488 00:12:52.279 }, 00:12:52.279 { 00:12:52.279 "name": null, 00:12:52.279 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:52.279 "is_configured": false, 00:12:52.279 "data_offset": 0, 00:12:52.279 "data_size": 63488 00:12:52.279 }, 00:12:52.279 { 00:12:52.279 "name": "BaseBdev3", 00:12:52.279 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:52.279 "is_configured": true, 00:12:52.279 "data_offset": 2048, 00:12:52.279 "data_size": 63488 00:12:52.279 } 00:12:52.279 ] 00:12:52.279 }' 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.279 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.846 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:52.846 07:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.846 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.846 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.846 07:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.846 [2024-11-20 07:09:50.015735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.846 "name": "Existed_Raid", 00:12:52.846 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:52.846 "strip_size_kb": 64, 00:12:52.846 "state": "configuring", 00:12:52.846 "raid_level": "concat", 00:12:52.846 "superblock": true, 00:12:52.846 "num_base_bdevs": 3, 00:12:52.846 "num_base_bdevs_discovered": 1, 00:12:52.846 "num_base_bdevs_operational": 3, 00:12:52.846 "base_bdevs_list": [ 00:12:52.846 { 00:12:52.846 "name": "BaseBdev1", 00:12:52.846 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:52.846 "is_configured": true, 00:12:52.846 "data_offset": 2048, 00:12:52.846 "data_size": 63488 00:12:52.846 }, 00:12:52.846 { 00:12:52.846 "name": null, 00:12:52.846 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:52.846 "is_configured": false, 00:12:52.846 "data_offset": 0, 00:12:52.846 "data_size": 63488 00:12:52.846 }, 00:12:52.846 { 00:12:52.846 "name": null, 00:12:52.846 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:52.846 "is_configured": false, 00:12:52.846 "data_offset": 0, 00:12:52.846 "data_size": 63488 00:12:52.846 } 00:12:52.846 ] 00:12:52.846 }' 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.846 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.414 [2024-11-20 07:09:50.567910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.414 07:09:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.414 "name": "Existed_Raid", 00:12:53.414 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:53.414 "strip_size_kb": 64, 00:12:53.414 "state": "configuring", 00:12:53.414 "raid_level": "concat", 00:12:53.414 "superblock": true, 00:12:53.414 "num_base_bdevs": 3, 00:12:53.414 "num_base_bdevs_discovered": 2, 00:12:53.414 "num_base_bdevs_operational": 3, 00:12:53.414 "base_bdevs_list": [ 00:12:53.414 { 00:12:53.414 "name": "BaseBdev1", 00:12:53.414 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:53.414 "is_configured": true, 00:12:53.414 "data_offset": 2048, 00:12:53.414 "data_size": 63488 00:12:53.414 }, 00:12:53.414 { 00:12:53.414 "name": null, 00:12:53.414 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:53.414 "is_configured": false, 00:12:53.414 "data_offset": 0, 00:12:53.414 "data_size": 63488 00:12:53.414 }, 00:12:53.414 { 00:12:53.414 "name": "BaseBdev3", 00:12:53.414 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:53.414 "is_configured": true, 00:12:53.414 "data_offset": 2048, 00:12:53.414 "data_size": 63488 00:12:53.414 } 00:12:53.414 ] 00:12:53.414 }' 00:12:53.414 07:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.414 
07:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.981 [2024-11-20 07:09:51.152114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.981 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.982 07:09:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.982 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.240 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.240 "name": "Existed_Raid", 00:12:54.240 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:54.240 "strip_size_kb": 64, 00:12:54.240 "state": "configuring", 00:12:54.240 "raid_level": "concat", 00:12:54.240 "superblock": true, 00:12:54.240 "num_base_bdevs": 3, 00:12:54.240 "num_base_bdevs_discovered": 1, 00:12:54.240 "num_base_bdevs_operational": 3, 00:12:54.240 "base_bdevs_list": [ 00:12:54.240 { 00:12:54.240 "name": null, 00:12:54.240 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:54.240 "is_configured": false, 00:12:54.240 "data_offset": 0, 00:12:54.240 "data_size": 63488 00:12:54.240 }, 00:12:54.240 { 00:12:54.240 "name": null, 00:12:54.240 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:54.240 "is_configured": false, 
00:12:54.240 "data_offset": 0, 00:12:54.240 "data_size": 63488 00:12:54.240 }, 00:12:54.240 { 00:12:54.240 "name": "BaseBdev3", 00:12:54.240 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:54.240 "is_configured": true, 00:12:54.240 "data_offset": 2048, 00:12:54.240 "data_size": 63488 00:12:54.240 } 00:12:54.240 ] 00:12:54.240 }' 00:12:54.240 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.240 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.518 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.518 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.518 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.518 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.809 [2024-11-20 07:09:51.880158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.809 "name": "Existed_Raid", 00:12:54.809 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:54.809 "strip_size_kb": 64, 00:12:54.809 "state": "configuring", 00:12:54.809 "raid_level": "concat", 00:12:54.809 "superblock": true, 00:12:54.809 
"num_base_bdevs": 3, 00:12:54.809 "num_base_bdevs_discovered": 2, 00:12:54.809 "num_base_bdevs_operational": 3, 00:12:54.809 "base_bdevs_list": [ 00:12:54.809 { 00:12:54.809 "name": null, 00:12:54.809 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:54.809 "is_configured": false, 00:12:54.809 "data_offset": 0, 00:12:54.809 "data_size": 63488 00:12:54.809 }, 00:12:54.809 { 00:12:54.809 "name": "BaseBdev2", 00:12:54.809 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:54.809 "is_configured": true, 00:12:54.809 "data_offset": 2048, 00:12:54.809 "data_size": 63488 00:12:54.809 }, 00:12:54.809 { 00:12:54.809 "name": "BaseBdev3", 00:12:54.809 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:54.809 "is_configured": true, 00:12:54.809 "data_offset": 2048, 00:12:54.809 "data_size": 63488 00:12:54.809 } 00:12:54.809 ] 00:12:54.809 }' 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.809 07:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3572ff75-6f85-43b9-96ce-d6a874b8784d 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.377 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.377 [2024-11-20 07:09:52.522037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:55.377 NewBaseBdev 00:12:55.377 [2024-11-20 07:09:52.522471] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:55.377 [2024-11-20 07:09:52.522503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:55.377 [2024-11-20 07:09:52.522817] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:55.377 [2024-11-20 07:09:52.523026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:55.377 [2024-11-20 07:09:52.523044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:55.377 [2024-11-20 07:09:52.523211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.378 [ 00:12:55.378 { 00:12:55.378 "name": "NewBaseBdev", 00:12:55.378 "aliases": [ 00:12:55.378 "3572ff75-6f85-43b9-96ce-d6a874b8784d" 00:12:55.378 ], 00:12:55.378 "product_name": "Malloc disk", 00:12:55.378 "block_size": 512, 00:12:55.378 "num_blocks": 65536, 00:12:55.378 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:55.378 "assigned_rate_limits": { 00:12:55.378 "rw_ios_per_sec": 0, 00:12:55.378 "rw_mbytes_per_sec": 0, 00:12:55.378 "r_mbytes_per_sec": 0, 00:12:55.378 "w_mbytes_per_sec": 0 00:12:55.378 }, 00:12:55.378 "claimed": true, 00:12:55.378 "claim_type": "exclusive_write", 00:12:55.378 "zoned": false, 00:12:55.378 "supported_io_types": { 
00:12:55.378 "read": true, 00:12:55.378 "write": true, 00:12:55.378 "unmap": true, 00:12:55.378 "flush": true, 00:12:55.378 "reset": true, 00:12:55.378 "nvme_admin": false, 00:12:55.378 "nvme_io": false, 00:12:55.378 "nvme_io_md": false, 00:12:55.378 "write_zeroes": true, 00:12:55.378 "zcopy": true, 00:12:55.378 "get_zone_info": false, 00:12:55.378 "zone_management": false, 00:12:55.378 "zone_append": false, 00:12:55.378 "compare": false, 00:12:55.378 "compare_and_write": false, 00:12:55.378 "abort": true, 00:12:55.378 "seek_hole": false, 00:12:55.378 "seek_data": false, 00:12:55.378 "copy": true, 00:12:55.378 "nvme_iov_md": false 00:12:55.378 }, 00:12:55.378 "memory_domains": [ 00:12:55.378 { 00:12:55.378 "dma_device_id": "system", 00:12:55.378 "dma_device_type": 1 00:12:55.378 }, 00:12:55.378 { 00:12:55.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.378 "dma_device_type": 2 00:12:55.378 } 00:12:55.378 ], 00:12:55.378 "driver_specific": {} 00:12:55.378 } 00:12:55.378 ] 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.378 07:09:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.378 "name": "Existed_Raid", 00:12:55.378 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:55.378 "strip_size_kb": 64, 00:12:55.378 "state": "online", 00:12:55.378 "raid_level": "concat", 00:12:55.378 "superblock": true, 00:12:55.378 "num_base_bdevs": 3, 00:12:55.378 "num_base_bdevs_discovered": 3, 00:12:55.378 "num_base_bdevs_operational": 3, 00:12:55.378 "base_bdevs_list": [ 00:12:55.378 { 00:12:55.378 "name": "NewBaseBdev", 00:12:55.378 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:55.378 "is_configured": true, 00:12:55.378 "data_offset": 2048, 00:12:55.378 "data_size": 63488 00:12:55.378 }, 00:12:55.378 { 00:12:55.378 "name": "BaseBdev2", 00:12:55.378 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:55.378 "is_configured": true, 00:12:55.378 "data_offset": 2048, 00:12:55.378 "data_size": 63488 00:12:55.378 }, 00:12:55.378 { 00:12:55.378 
"name": "BaseBdev3", 00:12:55.378 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:55.378 "is_configured": true, 00:12:55.378 "data_offset": 2048, 00:12:55.378 "data_size": 63488 00:12:55.378 } 00:12:55.378 ] 00:12:55.378 }' 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.378 07:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.946 [2024-11-20 07:09:53.030600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.946 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.946 "name": "Existed_Raid", 00:12:55.946 "aliases": [ 
00:12:55.946 "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39" 00:12:55.946 ], 00:12:55.946 "product_name": "Raid Volume", 00:12:55.946 "block_size": 512, 00:12:55.946 "num_blocks": 190464, 00:12:55.946 "uuid": "6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:55.946 "assigned_rate_limits": { 00:12:55.946 "rw_ios_per_sec": 0, 00:12:55.946 "rw_mbytes_per_sec": 0, 00:12:55.946 "r_mbytes_per_sec": 0, 00:12:55.946 "w_mbytes_per_sec": 0 00:12:55.946 }, 00:12:55.946 "claimed": false, 00:12:55.946 "zoned": false, 00:12:55.946 "supported_io_types": { 00:12:55.946 "read": true, 00:12:55.946 "write": true, 00:12:55.946 "unmap": true, 00:12:55.946 "flush": true, 00:12:55.946 "reset": true, 00:12:55.946 "nvme_admin": false, 00:12:55.946 "nvme_io": false, 00:12:55.946 "nvme_io_md": false, 00:12:55.946 "write_zeroes": true, 00:12:55.946 "zcopy": false, 00:12:55.946 "get_zone_info": false, 00:12:55.946 "zone_management": false, 00:12:55.946 "zone_append": false, 00:12:55.946 "compare": false, 00:12:55.946 "compare_and_write": false, 00:12:55.946 "abort": false, 00:12:55.946 "seek_hole": false, 00:12:55.946 "seek_data": false, 00:12:55.946 "copy": false, 00:12:55.946 "nvme_iov_md": false 00:12:55.946 }, 00:12:55.946 "memory_domains": [ 00:12:55.946 { 00:12:55.946 "dma_device_id": "system", 00:12:55.946 "dma_device_type": 1 00:12:55.946 }, 00:12:55.946 { 00:12:55.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.946 "dma_device_type": 2 00:12:55.946 }, 00:12:55.946 { 00:12:55.946 "dma_device_id": "system", 00:12:55.946 "dma_device_type": 1 00:12:55.946 }, 00:12:55.946 { 00:12:55.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.947 "dma_device_type": 2 00:12:55.947 }, 00:12:55.947 { 00:12:55.947 "dma_device_id": "system", 00:12:55.947 "dma_device_type": 1 00:12:55.947 }, 00:12:55.947 { 00:12:55.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.947 "dma_device_type": 2 00:12:55.947 } 00:12:55.947 ], 00:12:55.947 "driver_specific": { 00:12:55.947 "raid": { 00:12:55.947 "uuid": 
"6c0981b1-e1c4-4f9f-8fa2-c61b296faf39", 00:12:55.947 "strip_size_kb": 64, 00:12:55.947 "state": "online", 00:12:55.947 "raid_level": "concat", 00:12:55.947 "superblock": true, 00:12:55.947 "num_base_bdevs": 3, 00:12:55.947 "num_base_bdevs_discovered": 3, 00:12:55.947 "num_base_bdevs_operational": 3, 00:12:55.947 "base_bdevs_list": [ 00:12:55.947 { 00:12:55.947 "name": "NewBaseBdev", 00:12:55.947 "uuid": "3572ff75-6f85-43b9-96ce-d6a874b8784d", 00:12:55.947 "is_configured": true, 00:12:55.947 "data_offset": 2048, 00:12:55.947 "data_size": 63488 00:12:55.947 }, 00:12:55.947 { 00:12:55.947 "name": "BaseBdev2", 00:12:55.947 "uuid": "c5aeabd1-e925-416d-9eda-31bfbd9d58b4", 00:12:55.947 "is_configured": true, 00:12:55.947 "data_offset": 2048, 00:12:55.947 "data_size": 63488 00:12:55.947 }, 00:12:55.947 { 00:12:55.947 "name": "BaseBdev3", 00:12:55.947 "uuid": "1ba5231e-2d10-46c2-a145-965435b58a1a", 00:12:55.947 "is_configured": true, 00:12:55.947 "data_offset": 2048, 00:12:55.947 "data_size": 63488 00:12:55.947 } 00:12:55.947 ] 00:12:55.947 } 00:12:55.947 } 00:12:55.947 }' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:55.947 BaseBdev2 00:12:55.947 BaseBdev3' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:55.947 07:09:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.947 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.205 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.205 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.205 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:56.205 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:56.205 07:09:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.206 [2024-11-20 07:09:53.362336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.206 [2024-11-20 07:09:53.362374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.206 [2024-11-20 07:09:53.362478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.206 [2024-11-20 07:09:53.362557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.206 [2024-11-20 07:09:53.362578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66184 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 66184 ']' 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66184 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66184 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.206 killing process with pid 66184 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66184' 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66184 00:12:56.206 [2024-11-20 07:09:53.399492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.206 07:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66184 00:12:56.464 [2024-11-20 07:09:53.670329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:57.400 07:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:57.400 00:12:57.400 real 0m11.810s 00:12:57.400 user 0m19.544s 00:12:57.400 sys 0m1.599s 00:12:57.400 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.400 ************************************ 00:12:57.400 07:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.400 END TEST raid_state_function_test_sb 00:12:57.400 ************************************ 00:12:57.658 07:09:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test concat 3 00:12:57.658 07:09:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.658 07:09:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.658 07:09:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:57.658 ************************************ 00:12:57.658 START TEST raid_superblock_test 00:12:57.658 ************************************ 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # 
'[' concat '!=' raid1 ']' 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66821 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66821 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66821 ']' 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.658 07:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.658 [2024-11-20 07:09:54.852596] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:12:57.658 [2024-11-20 07:09:54.852751] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66821 ] 00:12:57.916 [2024-11-20 07:09:55.029286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.916 [2024-11-20 07:09:55.156781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.175 [2024-11-20 07:09:55.359327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.175 [2024-11-20 07:09:55.359407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:58.743 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:58.744 
07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.744 malloc1 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.744 [2024-11-20 07:09:55.939091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.744 [2024-11-20 07:09:55.939168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.744 [2024-11-20 07:09:55.939200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:58.744 [2024-11-20 07:09:55.939214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.744 [2024-11-20 07:09:55.941994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.744 [2024-11-20 07:09:55.942037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.744 pt1 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.744 malloc2 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.744 [2024-11-20 07:09:55.994890] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.744 [2024-11-20 07:09:55.994956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.744 [2024-11-20 07:09:55.994996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:58.744 [2024-11-20 07:09:55.995011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.744 [2024-11-20 07:09:55.997819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.744 [2024-11-20 07:09:55.997895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.744 
pt2 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:58.744 07:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.744 malloc3 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.744 [2024-11-20 07:09:56.056483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:58.744 [2024-11-20 07:09:56.056547] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.744 [2024-11-20 07:09:56.056578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:58.744 [2024-11-20 07:09:56.056593] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.744 [2024-11-20 07:09:56.059375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.744 [2024-11-20 07:09:56.059416] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:58.744 pt3 00:12:58.744 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.003 [2024-11-20 07:09:56.064529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:59.003 [2024-11-20 07:09:56.066940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.003 [2024-11-20 07:09:56.067035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.003 [2024-11-20 07:09:56.067243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:59.003 [2024-11-20 07:09:56.067266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:12:59.003 [2024-11-20 07:09:56.067584] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:12:59.003 [2024-11-20 07:09:56.067827] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:59.003 [2024-11-20 07:09:56.067844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:59.003 [2024-11-20 07:09:56.068046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.003 07:09:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.003 "name": "raid_bdev1", 00:12:59.003 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:12:59.003 "strip_size_kb": 64, 00:12:59.003 "state": "online", 00:12:59.003 "raid_level": "concat", 00:12:59.003 "superblock": true, 00:12:59.003 "num_base_bdevs": 3, 00:12:59.003 "num_base_bdevs_discovered": 3, 00:12:59.003 "num_base_bdevs_operational": 3, 00:12:59.003 "base_bdevs_list": [ 00:12:59.003 { 00:12:59.003 "name": "pt1", 00:12:59.003 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.003 "is_configured": true, 00:12:59.003 "data_offset": 2048, 00:12:59.003 "data_size": 63488 00:12:59.003 }, 00:12:59.003 { 00:12:59.003 "name": "pt2", 00:12:59.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.003 "is_configured": true, 00:12:59.003 "data_offset": 2048, 00:12:59.003 "data_size": 63488 00:12:59.003 }, 00:12:59.003 { 00:12:59.003 "name": "pt3", 00:12:59.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.003 "is_configured": true, 00:12:59.003 "data_offset": 2048, 00:12:59.003 "data_size": 63488 00:12:59.003 } 00:12:59.003 ] 00:12:59.003 }' 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.003 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.262 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.262 [2024-11-20 07:09:56.565014] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.521 "name": "raid_bdev1", 00:12:59.521 "aliases": [ 00:12:59.521 "36a225d2-0710-4e08-9d3c-03ec969e93ce" 00:12:59.521 ], 00:12:59.521 "product_name": "Raid Volume", 00:12:59.521 "block_size": 512, 00:12:59.521 "num_blocks": 190464, 00:12:59.521 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:12:59.521 "assigned_rate_limits": { 00:12:59.521 "rw_ios_per_sec": 0, 00:12:59.521 "rw_mbytes_per_sec": 0, 00:12:59.521 "r_mbytes_per_sec": 0, 00:12:59.521 "w_mbytes_per_sec": 0 00:12:59.521 }, 00:12:59.521 "claimed": false, 00:12:59.521 "zoned": false, 00:12:59.521 "supported_io_types": { 00:12:59.521 "read": true, 00:12:59.521 "write": true, 00:12:59.521 "unmap": true, 00:12:59.521 "flush": true, 00:12:59.521 "reset": true, 00:12:59.521 "nvme_admin": false, 00:12:59.521 "nvme_io": false, 00:12:59.521 "nvme_io_md": false, 00:12:59.521 "write_zeroes": true, 00:12:59.521 "zcopy": false, 00:12:59.521 "get_zone_info": false, 00:12:59.521 "zone_management": false, 00:12:59.521 "zone_append": false, 00:12:59.521 "compare": 
false, 00:12:59.521 "compare_and_write": false, 00:12:59.521 "abort": false, 00:12:59.521 "seek_hole": false, 00:12:59.521 "seek_data": false, 00:12:59.521 "copy": false, 00:12:59.521 "nvme_iov_md": false 00:12:59.521 }, 00:12:59.521 "memory_domains": [ 00:12:59.521 { 00:12:59.521 "dma_device_id": "system", 00:12:59.521 "dma_device_type": 1 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.521 "dma_device_type": 2 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "dma_device_id": "system", 00:12:59.521 "dma_device_type": 1 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.521 "dma_device_type": 2 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "dma_device_id": "system", 00:12:59.521 "dma_device_type": 1 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.521 "dma_device_type": 2 00:12:59.521 } 00:12:59.521 ], 00:12:59.521 "driver_specific": { 00:12:59.521 "raid": { 00:12:59.521 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:12:59.521 "strip_size_kb": 64, 00:12:59.521 "state": "online", 00:12:59.521 "raid_level": "concat", 00:12:59.521 "superblock": true, 00:12:59.521 "num_base_bdevs": 3, 00:12:59.521 "num_base_bdevs_discovered": 3, 00:12:59.521 "num_base_bdevs_operational": 3, 00:12:59.521 "base_bdevs_list": [ 00:12:59.521 { 00:12:59.521 "name": "pt1", 00:12:59.521 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.521 "is_configured": true, 00:12:59.521 "data_offset": 2048, 00:12:59.521 "data_size": 63488 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "name": "pt2", 00:12:59.521 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.521 "is_configured": true, 00:12:59.521 "data_offset": 2048, 00:12:59.521 "data_size": 63488 00:12:59.521 }, 00:12:59.521 { 00:12:59.521 "name": "pt3", 00:12:59.521 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.521 "is_configured": true, 00:12:59.521 "data_offset": 2048, 00:12:59.521 
"data_size": 63488 00:12:59.521 } 00:12:59.521 ] 00:12:59.521 } 00:12:59.521 } 00:12:59.521 }' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.521 pt2 00:12:59.521 pt3' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:59.521 07:09:56 
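The `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'` filter above extracts the configured base bdev names from the `bdev_get_bdevs` output. The same selection in Python, run against a trimmed copy of the JSON captured in this log (only fields present in the output above; the rest of the structure is omitted):

```python
import json

# Trimmed copy of the raid_bdev1 info dumped by bdev_get_bdevs in the log.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "driver_specific": {
    "raid": {
      "state": "online",
      "raid_level": "concat",
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of: select(.is_configured == true).name
names = [b["name"]
         for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]
print(names)  # ['pt1', 'pt2', 'pt3']
```

This is why `base_bdev_names` comes back as `pt1 pt2 pt3` in the log: all three passthru bdevs are configured members of the online concat volume.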
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.521 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 [2024-11-20 07:09:56.873068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.781 07:09:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=36a225d2-0710-4e08-9d3c-03ec969e93ce 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 36a225d2-0710-4e08-9d3c-03ec969e93ce ']' 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 [2024-11-20 07:09:56.924717] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.781 [2024-11-20 07:09:56.924913] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.781 [2024-11-20 07:09:56.925037] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.781 [2024-11-20 07:09:56.925122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:59.781 [2024-11-20 07:09:56.925138] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 [2024-11-20 07:09:57.056792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:59.781 [2024-11-20 07:09:57.059388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:12:59.781 [2024-11-20 07:09:57.059590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:59.781 [2024-11-20 07:09:57.059686] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:59.781 [2024-11-20 07:09:57.059758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:59.781 [2024-11-20 07:09:57.059791] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:59.781 [2024-11-20 07:09:57.059817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:59.781 [2024-11-20 07:09:57.059831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:12:59.781 request: 00:12:59.781 { 00:12:59.781 "name": "raid_bdev1", 00:12:59.781 "raid_level": "concat", 00:12:59.781 "base_bdevs": [ 00:12:59.781 "malloc1", 00:12:59.781 "malloc2", 00:12:59.781 "malloc3" 00:12:59.781 ], 00:12:59.781 "strip_size_kb": 64, 00:12:59.781 "superblock": false, 00:12:59.781 "method": "bdev_raid_create", 00:12:59.781 "req_id": 1 00:12:59.781 } 00:12:59.781 Got JSON-RPC error response 00:12:59.781 response: 00:12:59.781 { 00:12:59.781 "code": -17, 00:12:59.781 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:59.781 } 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
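The `NOT rpc_cmd bdev_raid_create ...` step above is a negative test: re-creating `raid_bdev1` on top of `malloc1 malloc2 malloc3` must fail because each malloc bdev already carries a superblock from the first raid bdev. A sketch of how the harness can classify the JSON-RPC reply, using the exact error payload captured in the log (`-17` is errno `EEXIST`):

```python
import json

# The JSON-RPC error response body recorded in the log above.
response = json.loads("""
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
""")

# Any negative code means the RPC failed; the NOT wrapper in the script
# passes only when this failure is observed.
is_error = response["code"] < 0
print(is_error, response["message"])
```

Because the failure is expected, `es=1` in the log is the passing outcome here, not a test error.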
(( !es == 0 )) 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.781 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.040 [2024-11-20 07:09:57.120739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:00.040 [2024-11-20 07:09:57.120938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.040 [2024-11-20 07:09:57.121020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:00.040 [2024-11-20 07:09:57.121127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.040 [2024-11-20 07:09:57.124019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.040 [2024-11-20 07:09:57.124064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:00.040 [2024-11-20 07:09:57.124168] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:00.040 [2024-11-20 07:09:57.124236] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:00.040 pt1 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.040 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.041 "name": "raid_bdev1", 
00:13:00.041 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:13:00.041 "strip_size_kb": 64, 00:13:00.041 "state": "configuring", 00:13:00.041 "raid_level": "concat", 00:13:00.041 "superblock": true, 00:13:00.041 "num_base_bdevs": 3, 00:13:00.041 "num_base_bdevs_discovered": 1, 00:13:00.041 "num_base_bdevs_operational": 3, 00:13:00.041 "base_bdevs_list": [ 00:13:00.041 { 00:13:00.041 "name": "pt1", 00:13:00.041 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.041 "is_configured": true, 00:13:00.041 "data_offset": 2048, 00:13:00.041 "data_size": 63488 00:13:00.041 }, 00:13:00.041 { 00:13:00.041 "name": null, 00:13:00.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.041 "is_configured": false, 00:13:00.041 "data_offset": 2048, 00:13:00.041 "data_size": 63488 00:13:00.041 }, 00:13:00.041 { 00:13:00.041 "name": null, 00:13:00.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.041 "is_configured": false, 00:13:00.041 "data_offset": 2048, 00:13:00.041 "data_size": 63488 00:13:00.041 } 00:13:00.041 ] 00:13:00.041 }' 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.041 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.308 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:00.308 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.308 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.308 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.573 [2024-11-20 07:09:57.628919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.573 [2024-11-20 07:09:57.629123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.573 [2024-11-20 07:09:57.629169] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:00.573 [2024-11-20 07:09:57.629185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.573 [2024-11-20 07:09:57.629739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.573 [2024-11-20 07:09:57.629771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.573 [2024-11-20 07:09:57.629895] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.573 [2024-11-20 07:09:57.629927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.573 pt2 00:13:00.573 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.573 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.573 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.574 [2024-11-20 07:09:57.636937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.574 "name": "raid_bdev1", 00:13:00.574 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:13:00.574 "strip_size_kb": 64, 00:13:00.574 "state": "configuring", 00:13:00.574 "raid_level": "concat", 00:13:00.574 "superblock": true, 00:13:00.574 "num_base_bdevs": 3, 00:13:00.574 "num_base_bdevs_discovered": 1, 00:13:00.574 "num_base_bdevs_operational": 3, 00:13:00.574 "base_bdevs_list": [ 00:13:00.574 { 00:13:00.574 "name": "pt1", 00:13:00.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:00.574 "is_configured": true, 00:13:00.574 "data_offset": 2048, 00:13:00.574 "data_size": 63488 00:13:00.574 }, 00:13:00.574 { 00:13:00.574 "name": null, 00:13:00.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.574 "is_configured": false, 00:13:00.574 "data_offset": 0, 00:13:00.574 "data_size": 63488 00:13:00.574 }, 00:13:00.574 { 00:13:00.574 "name": null, 00:13:00.574 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.574 "is_configured": false, 00:13:00.574 "data_offset": 2048, 00:13:00.574 "data_size": 63488 00:13:00.574 } 00:13:00.574 ] 00:13:00.574 }' 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.574 07:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.142 [2024-11-20 07:09:58.177015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:01.142 [2024-11-20 07:09:58.177246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.142 [2024-11-20 07:09:58.177316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:01.142 [2024-11-20 07:09:58.177443] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.142 [2024-11-20 07:09:58.178040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.142 [2024-11-20 07:09:58.178072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:01.142 [2024-11-20 07:09:58.178177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:01.142 [2024-11-20 07:09:58.178223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.142 pt2 00:13:01.142 07:09:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.142 [2024-11-20 07:09:58.184987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.142 [2024-11-20 07:09:58.185047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.142 [2024-11-20 07:09:58.185068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.142 [2024-11-20 07:09:58.185087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.142 [2024-11-20 07:09:58.185518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.142 [2024-11-20 07:09:58.185565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.142 [2024-11-20 07:09:58.185639] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.142 [2024-11-20 07:09:58.185671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.142 [2024-11-20 07:09:58.185812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:01.142 [2024-11-20 07:09:58.185832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:01.142 [2024-11-20 07:09:58.186158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:13:01.142 [2024-11-20 07:09:58.186344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:01.142 [2024-11-20 07:09:58.186358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:01.142 [2024-11-20 07:09:58.186517] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.142 pt3 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.142 07:09:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.142 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.142 "name": "raid_bdev1", 00:13:01.142 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:13:01.142 "strip_size_kb": 64, 00:13:01.142 "state": "online", 00:13:01.142 "raid_level": "concat", 00:13:01.142 "superblock": true, 00:13:01.142 "num_base_bdevs": 3, 00:13:01.143 "num_base_bdevs_discovered": 3, 00:13:01.143 "num_base_bdevs_operational": 3, 00:13:01.143 "base_bdevs_list": [ 00:13:01.143 { 00:13:01.143 "name": "pt1", 00:13:01.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.143 "is_configured": true, 00:13:01.143 "data_offset": 2048, 00:13:01.143 "data_size": 63488 00:13:01.143 }, 00:13:01.143 { 00:13:01.143 "name": "pt2", 00:13:01.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.143 "is_configured": true, 00:13:01.143 "data_offset": 2048, 00:13:01.143 "data_size": 63488 00:13:01.143 }, 00:13:01.143 { 00:13:01.143 "name": "pt3", 00:13:01.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.143 "is_configured": true, 00:13:01.143 "data_offset": 2048, 00:13:01.143 "data_size": 63488 00:13:01.143 } 00:13:01.143 ] 00:13:01.143 }' 00:13:01.143 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.143 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.718 [2024-11-20 07:09:58.733585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.718 "name": "raid_bdev1", 00:13:01.718 "aliases": [ 00:13:01.718 "36a225d2-0710-4e08-9d3c-03ec969e93ce" 00:13:01.718 ], 00:13:01.718 "product_name": "Raid Volume", 00:13:01.718 "block_size": 512, 00:13:01.718 "num_blocks": 190464, 00:13:01.718 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:13:01.718 "assigned_rate_limits": { 00:13:01.718 "rw_ios_per_sec": 0, 00:13:01.718 "rw_mbytes_per_sec": 0, 00:13:01.718 "r_mbytes_per_sec": 0, 00:13:01.718 "w_mbytes_per_sec": 0 00:13:01.718 }, 00:13:01.718 "claimed": false, 00:13:01.718 "zoned": false, 00:13:01.718 "supported_io_types": { 00:13:01.718 "read": true, 00:13:01.718 "write": true, 00:13:01.718 "unmap": true, 00:13:01.718 "flush": true, 00:13:01.718 "reset": true, 00:13:01.718 "nvme_admin": false, 00:13:01.718 "nvme_io": false, 00:13:01.718 
"nvme_io_md": false, 00:13:01.718 "write_zeroes": true, 00:13:01.718 "zcopy": false, 00:13:01.718 "get_zone_info": false, 00:13:01.718 "zone_management": false, 00:13:01.718 "zone_append": false, 00:13:01.718 "compare": false, 00:13:01.718 "compare_and_write": false, 00:13:01.718 "abort": false, 00:13:01.718 "seek_hole": false, 00:13:01.718 "seek_data": false, 00:13:01.718 "copy": false, 00:13:01.718 "nvme_iov_md": false 00:13:01.718 }, 00:13:01.718 "memory_domains": [ 00:13:01.718 { 00:13:01.718 "dma_device_id": "system", 00:13:01.718 "dma_device_type": 1 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.718 "dma_device_type": 2 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "dma_device_id": "system", 00:13:01.718 "dma_device_type": 1 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.718 "dma_device_type": 2 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "dma_device_id": "system", 00:13:01.718 "dma_device_type": 1 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.718 "dma_device_type": 2 00:13:01.718 } 00:13:01.718 ], 00:13:01.718 "driver_specific": { 00:13:01.718 "raid": { 00:13:01.718 "uuid": "36a225d2-0710-4e08-9d3c-03ec969e93ce", 00:13:01.718 "strip_size_kb": 64, 00:13:01.718 "state": "online", 00:13:01.718 "raid_level": "concat", 00:13:01.718 "superblock": true, 00:13:01.718 "num_base_bdevs": 3, 00:13:01.718 "num_base_bdevs_discovered": 3, 00:13:01.718 "num_base_bdevs_operational": 3, 00:13:01.718 "base_bdevs_list": [ 00:13:01.718 { 00:13:01.718 "name": "pt1", 00:13:01.718 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:01.718 "is_configured": true, 00:13:01.718 "data_offset": 2048, 00:13:01.718 "data_size": 63488 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "name": "pt2", 00:13:01.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.718 "is_configured": true, 00:13:01.718 "data_offset": 2048, 00:13:01.718 "data_size": 
63488 00:13:01.718 }, 00:13:01.718 { 00:13:01.718 "name": "pt3", 00:13:01.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.718 "is_configured": true, 00:13:01.718 "data_offset": 2048, 00:13:01.718 "data_size": 63488 00:13:01.718 } 00:13:01.718 ] 00:13:01.718 } 00:13:01.718 } 00:13:01.718 }' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:01.718 pt2 00:13:01.718 pt3' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.718 07:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.718 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.718 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.718 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:01.718 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:01.718 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.718 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:13:01.718 [2024-11-20 07:09:59.021636] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 36a225d2-0710-4e08-9d3c-03ec969e93ce '!=' 36a225d2-0710-4e08-9d3c-03ec969e93ce ']' 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66821 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66821 ']' 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66821 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66821 00:13:01.978 killing process with pid 66821 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66821' 00:13:01.978 07:09:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66821 00:13:01.978 [2024-11-20 07:09:59.091159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.978 07:09:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66821 00:13:01.978 [2024-11-20 07:09:59.091279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.978 [2024-11-20 07:09:59.091360] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.978 [2024-11-20 07:09:59.091379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:02.237 [2024-11-20 07:09:59.362440] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:03.172 07:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:03.172 00:13:03.172 real 0m5.626s 00:13:03.172 user 0m8.465s 00:13:03.172 sys 0m0.835s 00:13:03.173 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.173 ************************************ 00:13:03.173 END TEST raid_superblock_test 00:13:03.173 ************************************ 00:13:03.173 07:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.173 07:10:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:13:03.173 07:10:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:03.173 07:10:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.173 07:10:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.173 ************************************ 00:13:03.173 START TEST raid_read_error_test 00:13:03.173 ************************************ 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:03.173 07:10:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5U5rjKhwzl 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67074 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67074 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67074 ']' 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.173 07:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.431 [2024-11-20 07:10:00.562264] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:13:03.431 [2024-11-20 07:10:00.562457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67074 ] 00:13:03.431 [2024-11-20 07:10:00.746418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.689 [2024-11-20 07:10:00.932752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.948 [2024-11-20 07:10:01.134394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.948 [2024-11-20 07:10:01.134705] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 BaseBdev1_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 true 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 [2024-11-20 07:10:01.584471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:04.516 [2024-11-20 07:10:01.584677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.516 [2024-11-20 07:10:01.584761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:04.516 [2024-11-20 07:10:01.584787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.516 [2024-11-20 07:10:01.587640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.516 [2024-11-20 07:10:01.587710] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:04.516 BaseBdev1 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 BaseBdev2_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 true 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 [2024-11-20 07:10:01.641856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:04.516 [2024-11-20 07:10:01.642091] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.516 [2024-11-20 07:10:01.642163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:04.516 [2024-11-20 07:10:01.642313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.516 [2024-11-20 07:10:01.645265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.516 [2024-11-20 07:10:01.645318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:04.516 BaseBdev2 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 BaseBdev3_malloc 00:13:04.516 07:10:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.516 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.516 true 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.517 [2024-11-20 07:10:01.712967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:04.517 [2024-11-20 07:10:01.713039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.517 [2024-11-20 07:10:01.713069] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:04.517 [2024-11-20 07:10:01.713086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.517 [2024-11-20 07:10:01.715931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.517 [2024-11-20 07:10:01.715982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:04.517 BaseBdev3 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.517 [2024-11-20 07:10:01.725050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.517 [2024-11-20 07:10:01.727659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:04.517 [2024-11-20 07:10:01.727804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:04.517 [2024-11-20 07:10:01.728179] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:04.517 [2024-11-20 07:10:01.728213] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:04.517 [2024-11-20 07:10:01.728621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:04.517 [2024-11-20 07:10:01.728883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:04.517 [2024-11-20 07:10:01.728922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:04.517 [2024-11-20 07:10:01.729186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.517 07:10:01 
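The raid creation above reports "blockcnt 190464, blocklen 512", and each base bdev in the JSON that follows shows `data_offset: 2048` and `data_size: 63488`. That geometry follows from the earlier `bdev_malloc_create 32 512` calls; a minimal arithmetic check (constants taken from the log, helper names mine):

```python
# Sketch: verify the concat raid geometry reported in the log.
# Assumes: 32 MiB malloc base bdevs, 512 B blocks, superblock occupying
# the first 2048 blocks of each base bdev (the data_offset in the JSON).
BLOCKLEN = 512
MALLOC_MIB = 32
SUPERBLOCK_OFFSET_BLOCKS = 2048
NUM_BASE_BDEVS = 3

blocks_per_base = MALLOC_MIB * 1024 * 1024 // BLOCKLEN        # 65536 blocks per malloc bdev
data_size = blocks_per_base - SUPERBLOCK_OFFSET_BLOCKS        # 63488, matches the JSON
total_blocks = NUM_BASE_BDEVS * data_size                     # concat sums the usable blocks

print(blocks_per_base, data_size, total_blocks)  # → 65536 63488 190464
```

The concat level simply concatenates the usable regions, so the raid's blockcnt is 3 × 63488 = 190464, exactly what `raid_bdev_configure_cont` logs.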
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.517 "name": "raid_bdev1", 00:13:04.517 "uuid": "73d0d2be-e1d2-4453-b8be-b5101c9ac323", 00:13:04.517 "strip_size_kb": 64, 00:13:04.517 "state": "online", 00:13:04.517 "raid_level": "concat", 00:13:04.517 "superblock": true, 00:13:04.517 "num_base_bdevs": 3, 00:13:04.517 "num_base_bdevs_discovered": 3, 00:13:04.517 "num_base_bdevs_operational": 3, 00:13:04.517 "base_bdevs_list": [ 00:13:04.517 { 00:13:04.517 "name": "BaseBdev1", 00:13:04.517 "uuid": "8edc4782-98e0-5431-a87c-6e935dcd29b0", 00:13:04.517 "is_configured": true, 00:13:04.517 "data_offset": 2048, 00:13:04.517 "data_size": 63488 00:13:04.517 }, 00:13:04.517 { 00:13:04.517 "name": "BaseBdev2", 00:13:04.517 "uuid": "0268200d-63c7-51af-b856-5d543ef11133", 00:13:04.517 "is_configured": true, 00:13:04.517 "data_offset": 2048, 00:13:04.517 "data_size": 63488 
00:13:04.517 }, 00:13:04.517 { 00:13:04.517 "name": "BaseBdev3", 00:13:04.517 "uuid": "a424d020-6df4-58ba-9dd9-df9e14f14c24", 00:13:04.517 "is_configured": true, 00:13:04.517 "data_offset": 2048, 00:13:04.517 "data_size": 63488 00:13:04.517 } 00:13:04.517 ] 00:13:04.517 }' 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.517 07:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.083 07:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:05.083 07:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:05.083 [2024-11-20 07:10:02.386692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
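`verify_raid_bdev_state` pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks fields of the selected object. A Python equivalent of that filter and the subsequent checks, run against a trimmed sample mirroring the JSON captured in the log (sample data, not live RPC output):

```python
import json

# Trimmed sample of the bdev_raid_get_bdevs output shown in the log above.
raw = '''
[
  {
    "name": "raid_bdev1",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
'''

# Python equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in json.loads(raw) if b["name"] == "raid_bdev1")

# The same assertions verify_raid_bdev_state makes on the selected object.
assert info["state"] == "online"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 3
print(info["state"])  # → online
```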
00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.053 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.053 "name": "raid_bdev1", 00:13:06.053 "uuid": "73d0d2be-e1d2-4453-b8be-b5101c9ac323", 00:13:06.053 "strip_size_kb": 64, 00:13:06.053 "state": "online", 00:13:06.053 "raid_level": "concat", 00:13:06.053 "superblock": true, 00:13:06.053 "num_base_bdevs": 3, 00:13:06.053 "num_base_bdevs_discovered": 3, 00:13:06.053 "num_base_bdevs_operational": 3, 00:13:06.053 "base_bdevs_list": [ 00:13:06.053 { 00:13:06.053 "name": "BaseBdev1", 00:13:06.053 "uuid": "8edc4782-98e0-5431-a87c-6e935dcd29b0", 00:13:06.053 "is_configured": true, 00:13:06.053 "data_offset": 2048, 00:13:06.053 "data_size": 63488 
00:13:06.053 }, 00:13:06.053 { 00:13:06.053 "name": "BaseBdev2", 00:13:06.053 "uuid": "0268200d-63c7-51af-b856-5d543ef11133", 00:13:06.053 "is_configured": true, 00:13:06.053 "data_offset": 2048, 00:13:06.053 "data_size": 63488 00:13:06.053 }, 00:13:06.053 { 00:13:06.053 "name": "BaseBdev3", 00:13:06.053 "uuid": "a424d020-6df4-58ba-9dd9-df9e14f14c24", 00:13:06.053 "is_configured": true, 00:13:06.053 "data_offset": 2048, 00:13:06.053 "data_size": 63488 00:13:06.053 } 00:13:06.053 ] 00:13:06.053 }' 00:13:06.054 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.054 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.622 [2024-11-20 07:10:03.794169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:06.622 [2024-11-20 07:10:03.794223] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:06.622 [2024-11-20 07:10:03.797799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.622 [2024-11-20 07:10:03.797883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.622 [2024-11-20 07:10:03.797940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.622 [2024-11-20 07:10:03.797958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:06.622 { 00:13:06.622 "results": [ 00:13:06.622 { 00:13:06.622 "job": "raid_bdev1", 00:13:06.622 "core_mask": "0x1", 00:13:06.622 "workload": "randrw", 00:13:06.622 "percentage": 50, 
00:13:06.622 "status": "finished", 00:13:06.622 "queue_depth": 1, 00:13:06.622 "io_size": 131072, 00:13:06.622 "runtime": 1.404938, 00:13:06.622 "iops": 10222.51515725249, 00:13:06.622 "mibps": 1277.8143946565613, 00:13:06.622 "io_failed": 1, 00:13:06.622 "io_timeout": 0, 00:13:06.622 "avg_latency_us": 136.70249795877032, 00:13:06.622 "min_latency_us": 43.054545454545455, 00:13:06.622 "max_latency_us": 2278.8654545454547 00:13:06.622 } 00:13:06.622 ], 00:13:06.622 "core_count": 1 00:13:06.622 } 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67074 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67074 ']' 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67074 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67074 00:13:06.622 killing process with pid 67074 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67074' 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67074 00:13:06.622 [2024-11-20 07:10:03.834247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.622 07:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67074 00:13:06.879 [2024-11-20 
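The derived numbers in the results JSON above are simple functions of the raw counters: with 128 KiB I/Os, `mibps` is `iops / 8`, and the `fail_per_s` value the test later extracts with awk is `io_failed / runtime`. A quick reproduction using the values from this run:

```python
# Sketch: reproduce the derived bdevperf numbers from the results JSON above.
iops = 10222.51515725249
io_size = 131072          # 128 KiB per I/O (the -o 128k bdevperf argument)
runtime = 1.404938        # seconds
io_failed = 1             # the single injected read failure

mibps = iops * io_size / (1024 * 1024)   # 128 KiB I/Os -> MiB/s is iops / 8
fail_per_s = io_failed / runtime         # what the test later greps as 0.71

print(round(mibps, 2), round(fail_per_s, 2))  # → 1277.81 0.71
```

That 0.71 is the value the `[[ 0.71 != \0\.\0\0 ]]` check below accepts: for a concat array (no redundancy), the injected read error is expected to surface as a failed I/O rather than be masked.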
07:10:04.049036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5U5rjKhwzl 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:08.254 ************************************ 00:13:08.254 END TEST raid_read_error_test 00:13:08.254 ************************************ 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:08.254 00:13:08.254 real 0m4.733s 00:13:08.254 user 0m5.887s 00:13:08.254 sys 0m0.579s 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.254 07:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.254 07:10:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:13:08.254 07:10:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:08.254 07:10:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.254 07:10:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.254 ************************************ 00:13:08.254 START TEST raid_write_error_test 00:13:08.254 ************************************ 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:13:08.254 07:10:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:08.254 07:10:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4x5sOH0NhT 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67225 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67225 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67225 ']' 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.254 07:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.254 [2024-11-20 07:10:05.357660] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:13:08.254 [2024-11-20 07:10:05.357848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67225 ] 00:13:08.255 [2024-11-20 07:10:05.551725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.512 [2024-11-20 07:10:05.711069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.771 [2024-11-20 07:10:05.961797] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.771 [2024-11-20 07:10:05.961859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 BaseBdev1_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 true 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 [2024-11-20 07:10:06.422730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:09.341 [2024-11-20 07:10:06.422943] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.341 [2024-11-20 07:10:06.423025] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:09.341 [2024-11-20 07:10:06.423276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.341 [2024-11-20 07:10:06.426171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.341 [2024-11-20 07:10:06.426224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.341 BaseBdev1 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:09.341 BaseBdev2_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 true 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 [2024-11-20 07:10:06.488056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:09.341 [2024-11-20 07:10:06.488251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.341 [2024-11-20 07:10:06.488287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:09.341 [2024-11-20 07:10:06.488306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.341 [2024-11-20 07:10:06.491080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.341 [2024-11-20 07:10:06.491129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.341 BaseBdev2 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:09.341 07:10:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 BaseBdev3_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 true 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 [2024-11-20 07:10:06.559359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:09.341 [2024-11-20 07:10:06.559548] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.341 [2024-11-20 07:10:06.559618] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:09.341 [2024-11-20 07:10:06.559859] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.341 [2024-11-20 07:10:06.562666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.341 [2024-11-20 07:10:06.562828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:09.341 BaseBdev3 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.341 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.341 [2024-11-20 07:10:06.571607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.341 [2024-11-20 07:10:06.574213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.341 [2024-11-20 07:10:06.574442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.341 [2024-11-20 07:10:06.574843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:09.341 [2024-11-20 07:10:06.575001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:09.341 [2024-11-20 07:10:06.575379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:09.341 [2024-11-20 07:10:06.575737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:09.342 [2024-11-20 07:10:06.575905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:09.342 [2024-11-20 07:10:06.576270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.342 "name": "raid_bdev1", 00:13:09.342 "uuid": "d51e504c-a42f-4f78-b11b-88e2774a41c7", 00:13:09.342 "strip_size_kb": 64, 00:13:09.342 "state": "online", 00:13:09.342 "raid_level": "concat", 00:13:09.342 "superblock": true, 00:13:09.342 "num_base_bdevs": 3, 00:13:09.342 "num_base_bdevs_discovered": 3, 00:13:09.342 "num_base_bdevs_operational": 3, 00:13:09.342 "base_bdevs_list": [ 00:13:09.342 { 00:13:09.342 
"name": "BaseBdev1", 00:13:09.342 "uuid": "86c2fd1c-cfbc-5bb2-a907-367557dd235e", 00:13:09.342 "is_configured": true, 00:13:09.342 "data_offset": 2048, 00:13:09.342 "data_size": 63488 00:13:09.342 }, 00:13:09.342 { 00:13:09.342 "name": "BaseBdev2", 00:13:09.342 "uuid": "bcf3164b-8712-5f6b-b48b-38e079c34bd9", 00:13:09.342 "is_configured": true, 00:13:09.342 "data_offset": 2048, 00:13:09.342 "data_size": 63488 00:13:09.342 }, 00:13:09.342 { 00:13:09.342 "name": "BaseBdev3", 00:13:09.342 "uuid": "20368092-af2b-5f89-b2bb-1e9fc65c41b0", 00:13:09.342 "is_configured": true, 00:13:09.342 "data_offset": 2048, 00:13:09.342 "data_size": 63488 00:13:09.342 } 00:13:09.342 ] 00:13:09.342 }' 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.342 07:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.919 07:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:09.919 07:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:10.177 [2024-11-20 07:10:07.249897] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.114 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.115 "name": "raid_bdev1", 00:13:11.115 "uuid": "d51e504c-a42f-4f78-b11b-88e2774a41c7", 00:13:11.115 "strip_size_kb": 64, 00:13:11.115 "state": "online", 
00:13:11.115 "raid_level": "concat", 00:13:11.115 "superblock": true, 00:13:11.115 "num_base_bdevs": 3, 00:13:11.115 "num_base_bdevs_discovered": 3, 00:13:11.115 "num_base_bdevs_operational": 3, 00:13:11.115 "base_bdevs_list": [ 00:13:11.115 { 00:13:11.115 "name": "BaseBdev1", 00:13:11.115 "uuid": "86c2fd1c-cfbc-5bb2-a907-367557dd235e", 00:13:11.115 "is_configured": true, 00:13:11.115 "data_offset": 2048, 00:13:11.115 "data_size": 63488 00:13:11.115 }, 00:13:11.115 { 00:13:11.115 "name": "BaseBdev2", 00:13:11.115 "uuid": "bcf3164b-8712-5f6b-b48b-38e079c34bd9", 00:13:11.115 "is_configured": true, 00:13:11.115 "data_offset": 2048, 00:13:11.115 "data_size": 63488 00:13:11.115 }, 00:13:11.115 { 00:13:11.115 "name": "BaseBdev3", 00:13:11.115 "uuid": "20368092-af2b-5f89-b2bb-1e9fc65c41b0", 00:13:11.115 "is_configured": true, 00:13:11.115 "data_offset": 2048, 00:13:11.115 "data_size": 63488 00:13:11.115 } 00:13:11.115 ] 00:13:11.115 }' 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.115 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.374 [2024-11-20 07:10:08.670216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.374 [2024-11-20 07:10:08.670251] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.374 { 00:13:11.374 "results": [ 00:13:11.374 { 00:13:11.374 "job": "raid_bdev1", 00:13:11.374 "core_mask": "0x1", 00:13:11.374 "workload": "randrw", 00:13:11.374 "percentage": 50, 00:13:11.374 "status": "finished", 00:13:11.374 "queue_depth": 1, 00:13:11.374 "io_size": 
131072, 00:13:11.374 "runtime": 1.417436, 00:13:11.374 "iops": 10605.769854864699, 00:13:11.374 "mibps": 1325.7212318580873, 00:13:11.374 "io_failed": 1, 00:13:11.374 "io_timeout": 0, 00:13:11.374 "avg_latency_us": 131.67878288001742, 00:13:11.374 "min_latency_us": 36.77090909090909, 00:13:11.374 "max_latency_us": 1884.16 00:13:11.374 } 00:13:11.374 ], 00:13:11.374 "core_count": 1 00:13:11.374 } 00:13:11.374 [2024-11-20 07:10:08.673727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.374 [2024-11-20 07:10:08.673787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.374 [2024-11-20 07:10:08.673846] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.374 [2024-11-20 07:10:08.673863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67225 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67225 ']' 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67225 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.374 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67225 00:13:11.632 killing process with pid 67225 00:13:11.632 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.632 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.632 07:10:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67225' 00:13:11.632 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67225 00:13:11.632 [2024-11-20 07:10:08.708528] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.632 07:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67225 00:13:11.632 [2024-11-20 07:10:08.929784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4x5sOH0NhT 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:13.008 ************************************ 00:13:13.008 END TEST raid_write_error_test 00:13:13.008 ************************************ 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:13.008 00:13:13.008 real 0m4.824s 00:13:13.008 user 0m6.020s 00:13:13.008 sys 0m0.594s 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.008 07:10:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.008 07:10:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:13.008 07:10:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:13:13.008 07:10:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:13.008 07:10:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.008 07:10:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.008 ************************************ 00:13:13.008 START TEST raid_state_function_test 00:13:13.008 ************************************ 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:13.008 Process raid pid: 67369 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67369 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67369' 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67369 00:13:13.008 07:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:13.009 07:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67369 ']' 00:13:13.009 07:10:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.009 07:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:13.009 07:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.009 07:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:13.009 07:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.009 [2024-11-20 07:10:10.226690] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:13:13.009 [2024-11-20 07:10:10.226890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.267 [2024-11-20 07:10:10.410249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.267 [2024-11-20 07:10:10.541823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.526 [2024-11-20 07:10:10.749252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:13.526 [2024-11-20 07:10:10.749309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.093 [2024-11-20 07:10:11.254220] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.093 [2024-11-20 07:10:11.254286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.093 [2024-11-20 07:10:11.254304] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.093 [2024-11-20 07:10:11.254321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.093 [2024-11-20 07:10:11.254332] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.093 [2024-11-20 07:10:11.254346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.093 
07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.093 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.093 "name": "Existed_Raid", 00:13:14.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.093 "strip_size_kb": 0, 00:13:14.093 "state": "configuring", 00:13:14.093 "raid_level": "raid1", 00:13:14.093 "superblock": false, 00:13:14.093 "num_base_bdevs": 3, 00:13:14.093 "num_base_bdevs_discovered": 0, 00:13:14.093 "num_base_bdevs_operational": 3, 00:13:14.093 "base_bdevs_list": [ 00:13:14.093 { 00:13:14.093 "name": "BaseBdev1", 00:13:14.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.093 "is_configured": false, 00:13:14.093 "data_offset": 0, 00:13:14.093 "data_size": 0 00:13:14.093 }, 00:13:14.093 { 00:13:14.093 "name": "BaseBdev2", 00:13:14.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.094 "is_configured": false, 00:13:14.094 "data_offset": 0, 00:13:14.094 "data_size": 0 00:13:14.094 }, 00:13:14.094 { 00:13:14.094 "name": "BaseBdev3", 00:13:14.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.094 "is_configured": false, 00:13:14.094 "data_offset": 0, 00:13:14.094 "data_size": 0 00:13:14.094 } 00:13:14.094 ] 00:13:14.094 }' 00:13:14.094 07:10:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.094 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.659 [2024-11-20 07:10:11.818346] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:14.659 [2024-11-20 07:10:11.818389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.659 [2024-11-20 07:10:11.826310] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.659 [2024-11-20 07:10:11.826503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.659 [2024-11-20 07:10:11.826659] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.659 [2024-11-20 07:10:11.826726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.659 [2024-11-20 07:10:11.826860] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.659 [2024-11-20 07:10:11.826950] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.659 [2024-11-20 07:10:11.872046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.659 BaseBdev1 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.659 [ 00:13:14.659 { 00:13:14.659 "name": "BaseBdev1", 00:13:14.659 "aliases": [ 00:13:14.659 "75c52c5e-744d-4c0d-a1ab-2faa25bed781" 00:13:14.659 ], 00:13:14.659 "product_name": "Malloc disk", 00:13:14.659 "block_size": 512, 00:13:14.659 "num_blocks": 65536, 00:13:14.659 "uuid": "75c52c5e-744d-4c0d-a1ab-2faa25bed781", 00:13:14.659 "assigned_rate_limits": { 00:13:14.659 "rw_ios_per_sec": 0, 00:13:14.659 "rw_mbytes_per_sec": 0, 00:13:14.659 "r_mbytes_per_sec": 0, 00:13:14.659 "w_mbytes_per_sec": 0 00:13:14.659 }, 00:13:14.659 "claimed": true, 00:13:14.659 "claim_type": "exclusive_write", 00:13:14.659 "zoned": false, 00:13:14.659 "supported_io_types": { 00:13:14.659 "read": true, 00:13:14.659 "write": true, 00:13:14.659 "unmap": true, 00:13:14.659 "flush": true, 00:13:14.659 "reset": true, 00:13:14.659 "nvme_admin": false, 00:13:14.659 "nvme_io": false, 00:13:14.659 "nvme_io_md": false, 00:13:14.659 "write_zeroes": true, 00:13:14.659 "zcopy": true, 00:13:14.659 "get_zone_info": false, 00:13:14.659 "zone_management": false, 00:13:14.659 "zone_append": false, 00:13:14.659 "compare": false, 00:13:14.659 "compare_and_write": false, 00:13:14.659 "abort": true, 00:13:14.659 "seek_hole": false, 00:13:14.659 "seek_data": false, 00:13:14.659 "copy": true, 00:13:14.659 "nvme_iov_md": false 00:13:14.659 }, 00:13:14.659 "memory_domains": [ 00:13:14.659 { 00:13:14.659 "dma_device_id": "system", 00:13:14.659 "dma_device_type": 1 00:13:14.659 }, 00:13:14.659 { 00:13:14.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.659 "dma_device_type": 2 00:13:14.659 } 00:13:14.659 ], 00:13:14.659 "driver_specific": {} 00:13:14.659 } 00:13:14.659 ] 00:13:14.659 07:10:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.659 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:14.660 "name": "Existed_Raid", 00:13:14.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.660 "strip_size_kb": 0, 00:13:14.660 "state": "configuring", 00:13:14.660 "raid_level": "raid1", 00:13:14.660 "superblock": false, 00:13:14.660 "num_base_bdevs": 3, 00:13:14.660 "num_base_bdevs_discovered": 1, 00:13:14.660 "num_base_bdevs_operational": 3, 00:13:14.660 "base_bdevs_list": [ 00:13:14.660 { 00:13:14.660 "name": "BaseBdev1", 00:13:14.660 "uuid": "75c52c5e-744d-4c0d-a1ab-2faa25bed781", 00:13:14.660 "is_configured": true, 00:13:14.660 "data_offset": 0, 00:13:14.660 "data_size": 65536 00:13:14.660 }, 00:13:14.660 { 00:13:14.660 "name": "BaseBdev2", 00:13:14.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.660 "is_configured": false, 00:13:14.660 "data_offset": 0, 00:13:14.660 "data_size": 0 00:13:14.660 }, 00:13:14.660 { 00:13:14.660 "name": "BaseBdev3", 00:13:14.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.660 "is_configured": false, 00:13:14.660 "data_offset": 0, 00:13:14.660 "data_size": 0 00:13:14.660 } 00:13:14.660 ] 00:13:14.660 }' 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.660 07:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 [2024-11-20 07:10:12.436263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.229 [2024-11-20 07:10:12.436486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 [2024-11-20 07:10:12.444289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.229 [2024-11-20 07:10:12.446728] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.229 [2024-11-20 07:10:12.446779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.229 [2024-11-20 07:10:12.446796] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.229 [2024-11-20 07:10:12.446811] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.229 "name": "Existed_Raid", 00:13:15.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.229 "strip_size_kb": 0, 00:13:15.229 "state": "configuring", 00:13:15.229 "raid_level": "raid1", 00:13:15.229 "superblock": false, 00:13:15.229 "num_base_bdevs": 3, 00:13:15.229 "num_base_bdevs_discovered": 1, 00:13:15.229 "num_base_bdevs_operational": 3, 00:13:15.229 "base_bdevs_list": [ 00:13:15.229 { 00:13:15.229 "name": "BaseBdev1", 00:13:15.229 "uuid": "75c52c5e-744d-4c0d-a1ab-2faa25bed781", 00:13:15.229 "is_configured": true, 00:13:15.229 "data_offset": 0, 00:13:15.229 "data_size": 65536 00:13:15.229 }, 00:13:15.229 { 00:13:15.229 "name": "BaseBdev2", 00:13:15.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.229 
"is_configured": false, 00:13:15.229 "data_offset": 0, 00:13:15.229 "data_size": 0 00:13:15.229 }, 00:13:15.229 { 00:13:15.229 "name": "BaseBdev3", 00:13:15.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.229 "is_configured": false, 00:13:15.229 "data_offset": 0, 00:13:15.229 "data_size": 0 00:13:15.229 } 00:13:15.229 ] 00:13:15.229 }' 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.229 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.797 07:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:15.797 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.797 07:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.797 [2024-11-20 07:10:13.007793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.797 BaseBdev2 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:15.797 07:10:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.797 [ 00:13:15.797 { 00:13:15.797 "name": "BaseBdev2", 00:13:15.797 "aliases": [ 00:13:15.797 "6fa313aa-59f8-4ad8-9966-caa18ccc69a6" 00:13:15.797 ], 00:13:15.797 "product_name": "Malloc disk", 00:13:15.797 "block_size": 512, 00:13:15.797 "num_blocks": 65536, 00:13:15.797 "uuid": "6fa313aa-59f8-4ad8-9966-caa18ccc69a6", 00:13:15.797 "assigned_rate_limits": { 00:13:15.797 "rw_ios_per_sec": 0, 00:13:15.797 "rw_mbytes_per_sec": 0, 00:13:15.797 "r_mbytes_per_sec": 0, 00:13:15.797 "w_mbytes_per_sec": 0 00:13:15.797 }, 00:13:15.797 "claimed": true, 00:13:15.797 "claim_type": "exclusive_write", 00:13:15.797 "zoned": false, 00:13:15.797 "supported_io_types": { 00:13:15.797 "read": true, 00:13:15.797 "write": true, 00:13:15.797 "unmap": true, 00:13:15.797 "flush": true, 00:13:15.797 "reset": true, 00:13:15.797 "nvme_admin": false, 00:13:15.797 "nvme_io": false, 00:13:15.797 "nvme_io_md": false, 00:13:15.797 "write_zeroes": true, 00:13:15.797 "zcopy": true, 00:13:15.797 "get_zone_info": false, 00:13:15.797 "zone_management": false, 00:13:15.797 "zone_append": false, 00:13:15.797 "compare": false, 00:13:15.797 "compare_and_write": false, 00:13:15.797 "abort": true, 00:13:15.797 "seek_hole": false, 00:13:15.797 "seek_data": false, 00:13:15.797 "copy": true, 00:13:15.797 "nvme_iov_md": false 00:13:15.797 }, 00:13:15.797 
"memory_domains": [ 00:13:15.797 { 00:13:15.797 "dma_device_id": "system", 00:13:15.797 "dma_device_type": 1 00:13:15.797 }, 00:13:15.797 { 00:13:15.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:15.797 "dma_device_type": 2 00:13:15.797 } 00:13:15.797 ], 00:13:15.797 "driver_specific": {} 00:13:15.797 } 00:13:15.797 ] 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.797 "name": "Existed_Raid", 00:13:15.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.797 "strip_size_kb": 0, 00:13:15.797 "state": "configuring", 00:13:15.797 "raid_level": "raid1", 00:13:15.797 "superblock": false, 00:13:15.797 "num_base_bdevs": 3, 00:13:15.797 "num_base_bdevs_discovered": 2, 00:13:15.797 "num_base_bdevs_operational": 3, 00:13:15.797 "base_bdevs_list": [ 00:13:15.797 { 00:13:15.797 "name": "BaseBdev1", 00:13:15.797 "uuid": "75c52c5e-744d-4c0d-a1ab-2faa25bed781", 00:13:15.797 "is_configured": true, 00:13:15.797 "data_offset": 0, 00:13:15.797 "data_size": 65536 00:13:15.797 }, 00:13:15.797 { 00:13:15.797 "name": "BaseBdev2", 00:13:15.797 "uuid": "6fa313aa-59f8-4ad8-9966-caa18ccc69a6", 00:13:15.797 "is_configured": true, 00:13:15.797 "data_offset": 0, 00:13:15.797 "data_size": 65536 00:13:15.797 }, 00:13:15.797 { 00:13:15.797 "name": "BaseBdev3", 00:13:15.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.797 "is_configured": false, 00:13:15.797 "data_offset": 0, 00:13:15.797 "data_size": 0 00:13:15.797 } 00:13:15.797 ] 00:13:15.797 }' 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.797 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.365 [2024-11-20 07:10:13.621774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:16.365 [2024-11-20 07:10:13.622033] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:16.365 [2024-11-20 07:10:13.622067] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:16.365 [2024-11-20 07:10:13.622437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:16.365 [2024-11-20 07:10:13.622660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:16.365 [2024-11-20 07:10:13.622677] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:16.365 [2024-11-20 07:10:13.623029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.365 BaseBdev3 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.365 [ 00:13:16.365 { 00:13:16.365 "name": "BaseBdev3", 00:13:16.365 "aliases": [ 00:13:16.365 "a8486144-653f-4dbe-9dbb-725e4425ec77" 00:13:16.365 ], 00:13:16.365 "product_name": "Malloc disk", 00:13:16.365 "block_size": 512, 00:13:16.365 "num_blocks": 65536, 00:13:16.365 "uuid": "a8486144-653f-4dbe-9dbb-725e4425ec77", 00:13:16.365 "assigned_rate_limits": { 00:13:16.365 "rw_ios_per_sec": 0, 00:13:16.365 "rw_mbytes_per_sec": 0, 00:13:16.365 "r_mbytes_per_sec": 0, 00:13:16.365 "w_mbytes_per_sec": 0 00:13:16.365 }, 00:13:16.365 "claimed": true, 00:13:16.365 "claim_type": "exclusive_write", 00:13:16.365 "zoned": false, 00:13:16.365 "supported_io_types": { 00:13:16.365 "read": true, 00:13:16.365 "write": true, 00:13:16.365 "unmap": true, 00:13:16.365 "flush": true, 00:13:16.365 "reset": true, 00:13:16.365 "nvme_admin": false, 00:13:16.365 "nvme_io": false, 00:13:16.365 "nvme_io_md": false, 00:13:16.365 "write_zeroes": true, 00:13:16.365 "zcopy": true, 00:13:16.365 "get_zone_info": false, 00:13:16.365 "zone_management": false, 00:13:16.365 "zone_append": false, 00:13:16.365 "compare": false, 00:13:16.365 "compare_and_write": false, 00:13:16.365 "abort": true, 00:13:16.365 "seek_hole": false, 00:13:16.365 "seek_data": false, 00:13:16.365 
"copy": true, 00:13:16.365 "nvme_iov_md": false 00:13:16.365 }, 00:13:16.365 "memory_domains": [ 00:13:16.365 { 00:13:16.365 "dma_device_id": "system", 00:13:16.365 "dma_device_type": 1 00:13:16.365 }, 00:13:16.365 { 00:13:16.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:16.365 "dma_device_type": 2 00:13:16.365 } 00:13:16.365 ], 00:13:16.365 "driver_specific": {} 00:13:16.365 } 00:13:16.365 ] 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:16.365 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.366 07:10:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.366 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.628 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.628 "name": "Existed_Raid", 00:13:16.628 "uuid": "de70fcd5-6590-4269-b998-e79b82752df9", 00:13:16.628 "strip_size_kb": 0, 00:13:16.628 "state": "online", 00:13:16.628 "raid_level": "raid1", 00:13:16.628 "superblock": false, 00:13:16.628 "num_base_bdevs": 3, 00:13:16.628 "num_base_bdevs_discovered": 3, 00:13:16.628 "num_base_bdevs_operational": 3, 00:13:16.628 "base_bdevs_list": [ 00:13:16.628 { 00:13:16.628 "name": "BaseBdev1", 00:13:16.628 "uuid": "75c52c5e-744d-4c0d-a1ab-2faa25bed781", 00:13:16.628 "is_configured": true, 00:13:16.628 "data_offset": 0, 00:13:16.628 "data_size": 65536 00:13:16.628 }, 00:13:16.628 { 00:13:16.628 "name": "BaseBdev2", 00:13:16.628 "uuid": "6fa313aa-59f8-4ad8-9966-caa18ccc69a6", 00:13:16.628 "is_configured": true, 00:13:16.628 "data_offset": 0, 00:13:16.628 "data_size": 65536 00:13:16.628 }, 00:13:16.628 { 00:13:16.628 "name": "BaseBdev3", 00:13:16.628 "uuid": "a8486144-653f-4dbe-9dbb-725e4425ec77", 00:13:16.628 "is_configured": true, 00:13:16.628 "data_offset": 0, 00:13:16.628 "data_size": 65536 00:13:16.628 } 00:13:16.628 ] 00:13:16.628 }' 00:13:16.628 07:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.628 07:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.932 07:10:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.932 [2024-11-20 07:10:14.182414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.932 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.191 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:17.191 "name": "Existed_Raid", 00:13:17.191 "aliases": [ 00:13:17.191 "de70fcd5-6590-4269-b998-e79b82752df9" 00:13:17.191 ], 00:13:17.191 "product_name": "Raid Volume", 00:13:17.191 "block_size": 512, 00:13:17.191 "num_blocks": 65536, 00:13:17.191 "uuid": "de70fcd5-6590-4269-b998-e79b82752df9", 00:13:17.191 "assigned_rate_limits": { 00:13:17.191 "rw_ios_per_sec": 0, 00:13:17.191 "rw_mbytes_per_sec": 0, 00:13:17.191 "r_mbytes_per_sec": 0, 00:13:17.191 "w_mbytes_per_sec": 0 00:13:17.191 }, 00:13:17.191 "claimed": false, 00:13:17.191 "zoned": false, 
00:13:17.191 "supported_io_types": { 00:13:17.191 "read": true, 00:13:17.191 "write": true, 00:13:17.191 "unmap": false, 00:13:17.191 "flush": false, 00:13:17.191 "reset": true, 00:13:17.191 "nvme_admin": false, 00:13:17.191 "nvme_io": false, 00:13:17.191 "nvme_io_md": false, 00:13:17.191 "write_zeroes": true, 00:13:17.191 "zcopy": false, 00:13:17.191 "get_zone_info": false, 00:13:17.191 "zone_management": false, 00:13:17.191 "zone_append": false, 00:13:17.191 "compare": false, 00:13:17.191 "compare_and_write": false, 00:13:17.191 "abort": false, 00:13:17.191 "seek_hole": false, 00:13:17.191 "seek_data": false, 00:13:17.191 "copy": false, 00:13:17.191 "nvme_iov_md": false 00:13:17.191 }, 00:13:17.191 "memory_domains": [ 00:13:17.191 { 00:13:17.191 "dma_device_id": "system", 00:13:17.191 "dma_device_type": 1 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.191 "dma_device_type": 2 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "dma_device_id": "system", 00:13:17.191 "dma_device_type": 1 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.191 "dma_device_type": 2 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "dma_device_id": "system", 00:13:17.191 "dma_device_type": 1 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.191 "dma_device_type": 2 00:13:17.191 } 00:13:17.191 ], 00:13:17.191 "driver_specific": { 00:13:17.191 "raid": { 00:13:17.191 "uuid": "de70fcd5-6590-4269-b998-e79b82752df9", 00:13:17.191 "strip_size_kb": 0, 00:13:17.191 "state": "online", 00:13:17.191 "raid_level": "raid1", 00:13:17.191 "superblock": false, 00:13:17.191 "num_base_bdevs": 3, 00:13:17.191 "num_base_bdevs_discovered": 3, 00:13:17.191 "num_base_bdevs_operational": 3, 00:13:17.191 "base_bdevs_list": [ 00:13:17.191 { 00:13:17.191 "name": "BaseBdev1", 00:13:17.191 "uuid": "75c52c5e-744d-4c0d-a1ab-2faa25bed781", 00:13:17.191 "is_configured": true, 00:13:17.191 
"data_offset": 0, 00:13:17.191 "data_size": 65536 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "name": "BaseBdev2", 00:13:17.191 "uuid": "6fa313aa-59f8-4ad8-9966-caa18ccc69a6", 00:13:17.191 "is_configured": true, 00:13:17.191 "data_offset": 0, 00:13:17.191 "data_size": 65536 00:13:17.191 }, 00:13:17.191 { 00:13:17.191 "name": "BaseBdev3", 00:13:17.191 "uuid": "a8486144-653f-4dbe-9dbb-725e4425ec77", 00:13:17.191 "is_configured": true, 00:13:17.191 "data_offset": 0, 00:13:17.191 "data_size": 65536 00:13:17.192 } 00:13:17.192 ] 00:13:17.192 } 00:13:17.192 } 00:13:17.192 }' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:17.192 BaseBdev2 00:13:17.192 BaseBdev3' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.192 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.192 [2024-11-20 07:10:14.494624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.451 "name": "Existed_Raid", 00:13:17.451 "uuid": "de70fcd5-6590-4269-b998-e79b82752df9", 00:13:17.451 "strip_size_kb": 0, 00:13:17.451 "state": "online", 00:13:17.451 "raid_level": "raid1", 00:13:17.451 "superblock": false, 00:13:17.451 "num_base_bdevs": 3, 00:13:17.451 "num_base_bdevs_discovered": 2, 00:13:17.451 "num_base_bdevs_operational": 2, 00:13:17.451 "base_bdevs_list": [ 00:13:17.451 { 00:13:17.451 "name": null, 00:13:17.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.451 "is_configured": false, 00:13:17.451 "data_offset": 0, 00:13:17.451 "data_size": 65536 00:13:17.451 }, 00:13:17.451 { 00:13:17.451 "name": "BaseBdev2", 00:13:17.451 "uuid": "6fa313aa-59f8-4ad8-9966-caa18ccc69a6", 00:13:17.451 "is_configured": true, 00:13:17.451 "data_offset": 0, 00:13:17.451 "data_size": 65536 00:13:17.451 }, 00:13:17.451 { 00:13:17.451 "name": "BaseBdev3", 00:13:17.451 "uuid": "a8486144-653f-4dbe-9dbb-725e4425ec77", 00:13:17.451 "is_configured": true, 00:13:17.451 "data_offset": 0, 00:13:17.451 "data_size": 65536 00:13:17.451 } 00:13:17.451 ] 
00:13:17.451 }' 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.451 07:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.019 [2024-11-20 07:10:15.207321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.019 07:10:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.019 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.278 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.279 [2024-11-20 07:10:15.391816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:18.279 [2024-11-20 07:10:15.392182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.279 [2024-11-20 07:10:15.481782] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.279 [2024-11-20 07:10:15.481856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.279 [2024-11-20 07:10:15.481912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:18.279 07:10:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.279 BaseBdev2 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.279 
07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.279 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.538 [ 00:13:18.538 { 00:13:18.538 "name": "BaseBdev2", 00:13:18.538 "aliases": [ 00:13:18.538 "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b" 00:13:18.538 ], 00:13:18.538 "product_name": "Malloc disk", 00:13:18.538 "block_size": 512, 00:13:18.538 "num_blocks": 65536, 00:13:18.538 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:18.538 "assigned_rate_limits": { 00:13:18.538 "rw_ios_per_sec": 0, 00:13:18.538 "rw_mbytes_per_sec": 0, 00:13:18.538 "r_mbytes_per_sec": 0, 00:13:18.538 "w_mbytes_per_sec": 0 00:13:18.538 }, 00:13:18.538 "claimed": false, 00:13:18.538 "zoned": false, 00:13:18.538 "supported_io_types": { 00:13:18.538 "read": true, 00:13:18.538 "write": true, 00:13:18.538 "unmap": true, 00:13:18.538 "flush": true, 00:13:18.538 "reset": true, 00:13:18.538 "nvme_admin": false, 00:13:18.538 "nvme_io": false, 00:13:18.538 "nvme_io_md": false, 00:13:18.538 "write_zeroes": true, 
00:13:18.538 "zcopy": true, 00:13:18.538 "get_zone_info": false, 00:13:18.538 "zone_management": false, 00:13:18.538 "zone_append": false, 00:13:18.538 "compare": false, 00:13:18.538 "compare_and_write": false, 00:13:18.538 "abort": true, 00:13:18.538 "seek_hole": false, 00:13:18.538 "seek_data": false, 00:13:18.538 "copy": true, 00:13:18.538 "nvme_iov_md": false 00:13:18.538 }, 00:13:18.538 "memory_domains": [ 00:13:18.538 { 00:13:18.538 "dma_device_id": "system", 00:13:18.538 "dma_device_type": 1 00:13:18.538 }, 00:13:18.538 { 00:13:18.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.538 "dma_device_type": 2 00:13:18.538 } 00:13:18.538 ], 00:13:18.538 "driver_specific": {} 00:13:18.538 } 00:13:18.538 ] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.538 BaseBdev3 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:18.538 07:10:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.538 [ 00:13:18.538 { 00:13:18.538 "name": "BaseBdev3", 00:13:18.538 "aliases": [ 00:13:18.538 "79d7ba8e-866d-42dc-b913-7f321f2e2e20" 00:13:18.538 ], 00:13:18.538 "product_name": "Malloc disk", 00:13:18.538 "block_size": 512, 00:13:18.538 "num_blocks": 65536, 00:13:18.538 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:18.538 "assigned_rate_limits": { 00:13:18.538 "rw_ios_per_sec": 0, 00:13:18.538 "rw_mbytes_per_sec": 0, 00:13:18.538 "r_mbytes_per_sec": 0, 00:13:18.538 "w_mbytes_per_sec": 0 00:13:18.538 }, 00:13:18.538 "claimed": false, 00:13:18.538 "zoned": false, 00:13:18.538 "supported_io_types": { 00:13:18.538 "read": true, 00:13:18.538 "write": true, 00:13:18.538 "unmap": true, 00:13:18.538 "flush": true, 00:13:18.538 "reset": true, 00:13:18.538 "nvme_admin": false, 00:13:18.538 "nvme_io": false, 00:13:18.538 "nvme_io_md": false, 00:13:18.538 "write_zeroes": true, 
00:13:18.538 "zcopy": true, 00:13:18.538 "get_zone_info": false, 00:13:18.538 "zone_management": false, 00:13:18.538 "zone_append": false, 00:13:18.538 "compare": false, 00:13:18.538 "compare_and_write": false, 00:13:18.538 "abort": true, 00:13:18.538 "seek_hole": false, 00:13:18.538 "seek_data": false, 00:13:18.538 "copy": true, 00:13:18.538 "nvme_iov_md": false 00:13:18.538 }, 00:13:18.538 "memory_domains": [ 00:13:18.538 { 00:13:18.538 "dma_device_id": "system", 00:13:18.538 "dma_device_type": 1 00:13:18.538 }, 00:13:18.538 { 00:13:18.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.538 "dma_device_type": 2 00:13:18.538 } 00:13:18.538 ], 00:13:18.538 "driver_specific": {} 00:13:18.538 } 00:13:18.538 ] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.538 [2024-11-20 07:10:15.691951] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:18.538 [2024-11-20 07:10:15.692148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:18.538 [2024-11-20 07:10:15.692282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:18.538 [2024-11-20 07:10:15.694781] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.538 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:13:18.538 "name": "Existed_Raid", 00:13:18.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.538 "strip_size_kb": 0, 00:13:18.539 "state": "configuring", 00:13:18.539 "raid_level": "raid1", 00:13:18.539 "superblock": false, 00:13:18.539 "num_base_bdevs": 3, 00:13:18.539 "num_base_bdevs_discovered": 2, 00:13:18.539 "num_base_bdevs_operational": 3, 00:13:18.539 "base_bdevs_list": [ 00:13:18.539 { 00:13:18.539 "name": "BaseBdev1", 00:13:18.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.539 "is_configured": false, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 0 00:13:18.539 }, 00:13:18.539 { 00:13:18.539 "name": "BaseBdev2", 00:13:18.539 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:18.539 "is_configured": true, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 65536 00:13:18.539 }, 00:13:18.539 { 00:13:18.539 "name": "BaseBdev3", 00:13:18.539 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:18.539 "is_configured": true, 00:13:18.539 "data_offset": 0, 00:13:18.539 "data_size": 65536 00:13:18.539 } 00:13:18.539 ] 00:13:18.539 }' 00:13:18.539 07:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.539 07:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 [2024-11-20 07:10:16.228095] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.106 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.106 "name": "Existed_Raid", 00:13:19.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.106 "strip_size_kb": 0, 00:13:19.106 "state": "configuring", 00:13:19.107 "raid_level": "raid1", 00:13:19.107 "superblock": false, 00:13:19.107 "num_base_bdevs": 3, 
00:13:19.107 "num_base_bdevs_discovered": 1, 00:13:19.107 "num_base_bdevs_operational": 3, 00:13:19.107 "base_bdevs_list": [ 00:13:19.107 { 00:13:19.107 "name": "BaseBdev1", 00:13:19.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.107 "is_configured": false, 00:13:19.107 "data_offset": 0, 00:13:19.107 "data_size": 0 00:13:19.107 }, 00:13:19.107 { 00:13:19.107 "name": null, 00:13:19.107 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:19.107 "is_configured": false, 00:13:19.107 "data_offset": 0, 00:13:19.107 "data_size": 65536 00:13:19.107 }, 00:13:19.107 { 00:13:19.107 "name": "BaseBdev3", 00:13:19.107 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:19.107 "is_configured": true, 00:13:19.107 "data_offset": 0, 00:13:19.107 "data_size": 65536 00:13:19.107 } 00:13:19.107 ] 00:13:19.107 }' 00:13:19.107 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.107 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.675 07:10:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.675 [2024-11-20 07:10:16.846572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:19.675 BaseBdev1 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.675 [ 00:13:19.675 { 00:13:19.675 "name": "BaseBdev1", 00:13:19.675 "aliases": [ 00:13:19.675 "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4" 00:13:19.675 ], 00:13:19.675 "product_name": "Malloc disk", 
00:13:19.675 "block_size": 512, 00:13:19.675 "num_blocks": 65536, 00:13:19.675 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:19.675 "assigned_rate_limits": { 00:13:19.675 "rw_ios_per_sec": 0, 00:13:19.675 "rw_mbytes_per_sec": 0, 00:13:19.675 "r_mbytes_per_sec": 0, 00:13:19.675 "w_mbytes_per_sec": 0 00:13:19.675 }, 00:13:19.675 "claimed": true, 00:13:19.675 "claim_type": "exclusive_write", 00:13:19.675 "zoned": false, 00:13:19.675 "supported_io_types": { 00:13:19.675 "read": true, 00:13:19.675 "write": true, 00:13:19.675 "unmap": true, 00:13:19.675 "flush": true, 00:13:19.675 "reset": true, 00:13:19.675 "nvme_admin": false, 00:13:19.675 "nvme_io": false, 00:13:19.675 "nvme_io_md": false, 00:13:19.675 "write_zeroes": true, 00:13:19.675 "zcopy": true, 00:13:19.675 "get_zone_info": false, 00:13:19.675 "zone_management": false, 00:13:19.675 "zone_append": false, 00:13:19.675 "compare": false, 00:13:19.675 "compare_and_write": false, 00:13:19.675 "abort": true, 00:13:19.675 "seek_hole": false, 00:13:19.675 "seek_data": false, 00:13:19.675 "copy": true, 00:13:19.675 "nvme_iov_md": false 00:13:19.675 }, 00:13:19.675 "memory_domains": [ 00:13:19.675 { 00:13:19.675 "dma_device_id": "system", 00:13:19.675 "dma_device_type": 1 00:13:19.675 }, 00:13:19.675 { 00:13:19.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.675 "dma_device_type": 2 00:13:19.675 } 00:13:19.675 ], 00:13:19.675 "driver_specific": {} 00:13:19.675 } 00:13:19.675 ] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:19.675 "name": "Existed_Raid", 00:13:19.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.675 "strip_size_kb": 0, 00:13:19.675 "state": "configuring", 00:13:19.675 "raid_level": "raid1", 00:13:19.675 "superblock": false, 00:13:19.675 "num_base_bdevs": 3, 00:13:19.675 "num_base_bdevs_discovered": 2, 00:13:19.675 "num_base_bdevs_operational": 3, 00:13:19.675 "base_bdevs_list": [ 00:13:19.675 { 00:13:19.675 "name": "BaseBdev1", 00:13:19.675 "uuid": 
"cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:19.675 "is_configured": true, 00:13:19.675 "data_offset": 0, 00:13:19.675 "data_size": 65536 00:13:19.675 }, 00:13:19.675 { 00:13:19.675 "name": null, 00:13:19.675 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:19.675 "is_configured": false, 00:13:19.675 "data_offset": 0, 00:13:19.675 "data_size": 65536 00:13:19.675 }, 00:13:19.675 { 00:13:19.675 "name": "BaseBdev3", 00:13:19.675 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:19.675 "is_configured": true, 00:13:19.675 "data_offset": 0, 00:13:19.675 "data_size": 65536 00:13:19.675 } 00:13:19.675 ] 00:13:19.675 }' 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:19.675 07:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.244 [2024-11-20 07:10:17.498789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:20.244 07:10:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.244 "name": "Existed_Raid", 00:13:20.244 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:20.244 "strip_size_kb": 0, 00:13:20.244 "state": "configuring", 00:13:20.244 "raid_level": "raid1", 00:13:20.244 "superblock": false, 00:13:20.244 "num_base_bdevs": 3, 00:13:20.244 "num_base_bdevs_discovered": 1, 00:13:20.244 "num_base_bdevs_operational": 3, 00:13:20.244 "base_bdevs_list": [ 00:13:20.244 { 00:13:20.244 "name": "BaseBdev1", 00:13:20.244 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:20.244 "is_configured": true, 00:13:20.244 "data_offset": 0, 00:13:20.244 "data_size": 65536 00:13:20.244 }, 00:13:20.244 { 00:13:20.244 "name": null, 00:13:20.244 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:20.244 "is_configured": false, 00:13:20.244 "data_offset": 0, 00:13:20.244 "data_size": 65536 00:13:20.244 }, 00:13:20.244 { 00:13:20.244 "name": null, 00:13:20.244 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:20.244 "is_configured": false, 00:13:20.244 "data_offset": 0, 00:13:20.244 "data_size": 65536 00:13:20.244 } 00:13:20.244 ] 00:13:20.244 }' 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.244 07:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.816 [2024-11-20 07:10:18.095007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.816 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.075 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.075 "name": "Existed_Raid", 00:13:21.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.075 "strip_size_kb": 0, 00:13:21.075 "state": "configuring", 00:13:21.075 "raid_level": "raid1", 00:13:21.075 "superblock": false, 00:13:21.075 "num_base_bdevs": 3, 00:13:21.075 "num_base_bdevs_discovered": 2, 00:13:21.075 "num_base_bdevs_operational": 3, 00:13:21.075 "base_bdevs_list": [ 00:13:21.075 { 00:13:21.075 "name": "BaseBdev1", 00:13:21.075 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:21.075 "is_configured": true, 00:13:21.075 "data_offset": 0, 00:13:21.075 "data_size": 65536 00:13:21.075 }, 00:13:21.075 { 00:13:21.075 "name": null, 00:13:21.075 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:21.075 "is_configured": false, 00:13:21.075 "data_offset": 0, 00:13:21.075 "data_size": 65536 00:13:21.075 }, 00:13:21.075 { 00:13:21.075 "name": "BaseBdev3", 00:13:21.075 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:21.075 "is_configured": true, 00:13:21.075 "data_offset": 0, 00:13:21.075 "data_size": 65536 00:13:21.075 } 00:13:21.075 ] 00:13:21.075 }' 00:13:21.075 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.075 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.333 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.333 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.333 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.333 07:10:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:21.333 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.592 [2024-11-20 07:10:18.675206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.592 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.593 "name": "Existed_Raid", 00:13:21.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.593 "strip_size_kb": 0, 00:13:21.593 "state": "configuring", 00:13:21.593 "raid_level": "raid1", 00:13:21.593 "superblock": false, 00:13:21.593 "num_base_bdevs": 3, 00:13:21.593 "num_base_bdevs_discovered": 1, 00:13:21.593 "num_base_bdevs_operational": 3, 00:13:21.593 "base_bdevs_list": [ 00:13:21.593 { 00:13:21.593 "name": null, 00:13:21.593 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:21.593 "is_configured": false, 00:13:21.593 "data_offset": 0, 00:13:21.593 "data_size": 65536 00:13:21.593 }, 00:13:21.593 { 00:13:21.593 "name": null, 00:13:21.593 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:21.593 "is_configured": false, 00:13:21.593 "data_offset": 0, 00:13:21.593 "data_size": 65536 00:13:21.593 }, 00:13:21.593 { 00:13:21.593 "name": "BaseBdev3", 00:13:21.593 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:21.593 "is_configured": true, 00:13:21.593 "data_offset": 0, 00:13:21.593 "data_size": 65536 00:13:21.593 } 00:13:21.593 ] 00:13:21.593 }' 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.593 07:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.159 [2024-11-20 07:10:19.362226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.159 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.159 "name": "Existed_Raid", 00:13:22.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.159 "strip_size_kb": 0, 00:13:22.159 "state": "configuring", 00:13:22.159 "raid_level": "raid1", 00:13:22.159 "superblock": false, 00:13:22.159 "num_base_bdevs": 3, 00:13:22.159 "num_base_bdevs_discovered": 2, 00:13:22.159 "num_base_bdevs_operational": 3, 00:13:22.159 "base_bdevs_list": [ 00:13:22.159 { 00:13:22.159 "name": null, 00:13:22.159 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:22.159 "is_configured": false, 00:13:22.159 "data_offset": 0, 00:13:22.159 "data_size": 65536 00:13:22.159 }, 00:13:22.159 { 00:13:22.159 "name": "BaseBdev2", 00:13:22.159 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:22.160 "is_configured": true, 00:13:22.160 "data_offset": 0, 00:13:22.160 "data_size": 65536 00:13:22.160 }, 00:13:22.160 { 00:13:22.160 "name": "BaseBdev3", 
00:13:22.160 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:22.160 "is_configured": true, 00:13:22.160 "data_offset": 0, 00:13:22.160 "data_size": 65536 00:13:22.160 } 00:13:22.160 ] 00:13:22.160 }' 00:13:22.160 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.160 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.726 07:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.726 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4 00:13:22.726 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.726 07:10:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.726 [2024-11-20 07:10:20.040516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:22.726 [2024-11-20 07:10:20.040578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:22.726 [2024-11-20 07:10:20.040590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:22.726 [2024-11-20 07:10:20.040941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:22.726 [2024-11-20 07:10:20.041161] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:22.726 [2024-11-20 07:10:20.041183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:22.726 NewBaseBdev 00:13:22.726 [2024-11-20 07:10:20.041480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.985 
07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.985 [ 00:13:22.985 { 00:13:22.985 "name": "NewBaseBdev", 00:13:22.985 "aliases": [ 00:13:22.985 "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4" 00:13:22.985 ], 00:13:22.985 "product_name": "Malloc disk", 00:13:22.985 "block_size": 512, 00:13:22.985 "num_blocks": 65536, 00:13:22.985 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:22.985 "assigned_rate_limits": { 00:13:22.985 "rw_ios_per_sec": 0, 00:13:22.985 "rw_mbytes_per_sec": 0, 00:13:22.985 "r_mbytes_per_sec": 0, 00:13:22.985 "w_mbytes_per_sec": 0 00:13:22.985 }, 00:13:22.985 "claimed": true, 00:13:22.985 "claim_type": "exclusive_write", 00:13:22.985 "zoned": false, 00:13:22.985 "supported_io_types": { 00:13:22.985 "read": true, 00:13:22.985 "write": true, 00:13:22.985 "unmap": true, 00:13:22.985 "flush": true, 00:13:22.985 "reset": true, 00:13:22.985 "nvme_admin": false, 00:13:22.985 "nvme_io": false, 00:13:22.985 "nvme_io_md": false, 00:13:22.985 "write_zeroes": true, 00:13:22.985 "zcopy": true, 00:13:22.985 "get_zone_info": false, 00:13:22.985 "zone_management": false, 00:13:22.985 "zone_append": false, 00:13:22.985 "compare": false, 00:13:22.985 "compare_and_write": false, 00:13:22.985 "abort": true, 00:13:22.985 "seek_hole": false, 00:13:22.985 "seek_data": false, 00:13:22.985 "copy": true, 00:13:22.985 "nvme_iov_md": false 00:13:22.985 }, 00:13:22.985 "memory_domains": [ 00:13:22.985 { 00:13:22.985 "dma_device_id": "system", 00:13:22.985 "dma_device_type": 1 
00:13:22.985 }, 00:13:22.985 { 00:13:22.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.985 "dma_device_type": 2 00:13:22.985 } 00:13:22.985 ], 00:13:22.985 "driver_specific": {} 00:13:22.985 } 00:13:22.985 ] 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.985 "name": "Existed_Raid", 00:13:22.985 "uuid": "0960e113-a0dd-4b39-9dc0-b3e2cdb22bff", 00:13:22.985 "strip_size_kb": 0, 00:13:22.985 "state": "online", 00:13:22.985 "raid_level": "raid1", 00:13:22.985 "superblock": false, 00:13:22.985 "num_base_bdevs": 3, 00:13:22.985 "num_base_bdevs_discovered": 3, 00:13:22.985 "num_base_bdevs_operational": 3, 00:13:22.985 "base_bdevs_list": [ 00:13:22.985 { 00:13:22.985 "name": "NewBaseBdev", 00:13:22.985 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:22.985 "is_configured": true, 00:13:22.985 "data_offset": 0, 00:13:22.985 "data_size": 65536 00:13:22.985 }, 00:13:22.985 { 00:13:22.985 "name": "BaseBdev2", 00:13:22.985 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:22.985 "is_configured": true, 00:13:22.985 "data_offset": 0, 00:13:22.985 "data_size": 65536 00:13:22.985 }, 00:13:22.985 { 00:13:22.985 "name": "BaseBdev3", 00:13:22.985 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:22.985 "is_configured": true, 00:13:22.985 "data_offset": 0, 00:13:22.985 "data_size": 65536 00:13:22.985 } 00:13:22.985 ] 00:13:22.985 }' 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.985 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.284 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:23.284 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:23.284 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:23.284 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:13:23.285 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:23.285 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:23.285 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:23.285 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:23.285 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.285 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.285 [2024-11-20 07:10:20.597122] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:23.550 "name": "Existed_Raid", 00:13:23.550 "aliases": [ 00:13:23.550 "0960e113-a0dd-4b39-9dc0-b3e2cdb22bff" 00:13:23.550 ], 00:13:23.550 "product_name": "Raid Volume", 00:13:23.550 "block_size": 512, 00:13:23.550 "num_blocks": 65536, 00:13:23.550 "uuid": "0960e113-a0dd-4b39-9dc0-b3e2cdb22bff", 00:13:23.550 "assigned_rate_limits": { 00:13:23.550 "rw_ios_per_sec": 0, 00:13:23.550 "rw_mbytes_per_sec": 0, 00:13:23.550 "r_mbytes_per_sec": 0, 00:13:23.550 "w_mbytes_per_sec": 0 00:13:23.550 }, 00:13:23.550 "claimed": false, 00:13:23.550 "zoned": false, 00:13:23.550 "supported_io_types": { 00:13:23.550 "read": true, 00:13:23.550 "write": true, 00:13:23.550 "unmap": false, 00:13:23.550 "flush": false, 00:13:23.550 "reset": true, 00:13:23.550 "nvme_admin": false, 00:13:23.550 "nvme_io": false, 00:13:23.550 "nvme_io_md": false, 00:13:23.550 "write_zeroes": true, 00:13:23.550 "zcopy": false, 00:13:23.550 "get_zone_info": false, 00:13:23.550 "zone_management": false, 00:13:23.550 
"zone_append": false, 00:13:23.550 "compare": false, 00:13:23.550 "compare_and_write": false, 00:13:23.550 "abort": false, 00:13:23.550 "seek_hole": false, 00:13:23.550 "seek_data": false, 00:13:23.550 "copy": false, 00:13:23.550 "nvme_iov_md": false 00:13:23.550 }, 00:13:23.550 "memory_domains": [ 00:13:23.550 { 00:13:23.550 "dma_device_id": "system", 00:13:23.550 "dma_device_type": 1 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.550 "dma_device_type": 2 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "dma_device_id": "system", 00:13:23.550 "dma_device_type": 1 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.550 "dma_device_type": 2 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "dma_device_id": "system", 00:13:23.550 "dma_device_type": 1 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.550 "dma_device_type": 2 00:13:23.550 } 00:13:23.550 ], 00:13:23.550 "driver_specific": { 00:13:23.550 "raid": { 00:13:23.550 "uuid": "0960e113-a0dd-4b39-9dc0-b3e2cdb22bff", 00:13:23.550 "strip_size_kb": 0, 00:13:23.550 "state": "online", 00:13:23.550 "raid_level": "raid1", 00:13:23.550 "superblock": false, 00:13:23.550 "num_base_bdevs": 3, 00:13:23.550 "num_base_bdevs_discovered": 3, 00:13:23.550 "num_base_bdevs_operational": 3, 00:13:23.550 "base_bdevs_list": [ 00:13:23.550 { 00:13:23.550 "name": "NewBaseBdev", 00:13:23.550 "uuid": "cf5f9b9b-43c2-43f9-a8fc-7193b3693ba4", 00:13:23.550 "is_configured": true, 00:13:23.550 "data_offset": 0, 00:13:23.550 "data_size": 65536 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "name": "BaseBdev2", 00:13:23.550 "uuid": "1d88bcd8-6fe6-424d-bc74-2bfa795ea30b", 00:13:23.550 "is_configured": true, 00:13:23.550 "data_offset": 0, 00:13:23.550 "data_size": 65536 00:13:23.550 }, 00:13:23.550 { 00:13:23.550 "name": "BaseBdev3", 00:13:23.550 "uuid": "79d7ba8e-866d-42dc-b913-7f321f2e2e20", 00:13:23.550 "is_configured": true, 
00:13:23.550 "data_offset": 0, 00:13:23.550 "data_size": 65536 00:13:23.550 } 00:13:23.550 ] 00:13:23.550 } 00:13:23.550 } 00:13:23.550 }' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:23.550 BaseBdev2 00:13:23.550 BaseBdev3' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.550 07:10:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.550 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.551 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.551 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.551 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.551 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.551 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.551 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.809 [2024-11-20 07:10:20.904812] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:13:23.809 [2024-11-20 07:10:20.904994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:23.809 [2024-11-20 07:10:20.905190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:23.809 [2024-11-20 07:10:20.905684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:23.809 [2024-11-20 07:10:20.905839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67369 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67369 ']' 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67369 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67369 00:13:23.809 killing process with pid 67369 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67369' 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67369 00:13:23.809 [2024-11-20 07:10:20.943906] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:13:23.809 07:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67369 00:13:24.068 [2024-11-20 07:10:21.213338] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:25.004 00:13:25.004 real 0m12.140s 00:13:25.004 user 0m20.237s 00:13:25.004 sys 0m1.659s 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.004 ************************************ 00:13:25.004 END TEST raid_state_function_test 00:13:25.004 ************************************ 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.004 07:10:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:13:25.004 07:10:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:25.004 07:10:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.004 07:10:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.004 ************************************ 00:13:25.004 START TEST raid_state_function_test_sb 00:13:25.004 ************************************ 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:13:25.004 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:25.005 Process raid pid: 68007 00:13:25.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68007 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68007' 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68007 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68007 ']' 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.005 07:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.263 [2024-11-20 07:10:22.403720] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:13:25.263 [2024-11-20 07:10:22.405915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.521 [2024-11-20 07:10:22.602009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.521 [2024-11-20 07:10:22.732107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.780 [2024-11-20 07:10:22.939249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:25.780 [2024-11-20 07:10:22.939307] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.347 [2024-11-20 07:10:23.422992] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.347 [2024-11-20 07:10:23.423210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.347 [2024-11-20 07:10:23.423338] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.347 [2024-11-20 07:10:23.423413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.347 [2024-11-20 07:10:23.423660] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:26.347 [2024-11-20 07:10:23.423747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.347 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.348 "name": "Existed_Raid", 00:13:26.348 "uuid": "2d89191e-1fcf-4d00-860a-e3e38d29c0cf", 00:13:26.348 "strip_size_kb": 0, 00:13:26.348 "state": "configuring", 00:13:26.348 "raid_level": "raid1", 00:13:26.348 "superblock": true, 00:13:26.348 "num_base_bdevs": 3, 00:13:26.348 "num_base_bdevs_discovered": 0, 00:13:26.348 "num_base_bdevs_operational": 3, 00:13:26.348 "base_bdevs_list": [ 00:13:26.348 { 00:13:26.348 "name": "BaseBdev1", 00:13:26.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.348 "is_configured": false, 00:13:26.348 "data_offset": 0, 00:13:26.348 "data_size": 0 00:13:26.348 }, 00:13:26.348 { 00:13:26.348 "name": "BaseBdev2", 00:13:26.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.348 "is_configured": false, 00:13:26.348 "data_offset": 0, 00:13:26.348 "data_size": 0 00:13:26.348 }, 00:13:26.348 { 00:13:26.348 "name": "BaseBdev3", 00:13:26.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.348 "is_configured": false, 00:13:26.348 "data_offset": 0, 00:13:26.348 "data_size": 0 00:13:26.348 } 00:13:26.348 ] 00:13:26.348 }' 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.348 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.606 [2024-11-20 07:10:23.915099] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:26.606 [2024-11-20 07:10:23.915145] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.606 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.606 [2024-11-20 07:10:23.923072] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:26.606 [2024-11-20 07:10:23.923259] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:26.606 [2024-11-20 07:10:23.923379] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:26.606 [2024-11-20 07:10:23.923543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:26.606 [2024-11-20 07:10:23.923660] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:26.606 [2024-11-20 07:10:23.923761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.866 [2024-11-20 07:10:23.968797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.866 BaseBdev1 
00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.866 [ 00:13:26.866 { 00:13:26.866 "name": "BaseBdev1", 00:13:26.866 "aliases": [ 00:13:26.866 "dc6907fa-67b7-4bb2-8f24-d94a74395953" 00:13:26.866 ], 00:13:26.866 "product_name": "Malloc disk", 00:13:26.866 "block_size": 512, 00:13:26.866 "num_blocks": 65536, 00:13:26.866 "uuid": "dc6907fa-67b7-4bb2-8f24-d94a74395953", 00:13:26.866 "assigned_rate_limits": { 00:13:26.866 
"rw_ios_per_sec": 0, 00:13:26.866 "rw_mbytes_per_sec": 0, 00:13:26.866 "r_mbytes_per_sec": 0, 00:13:26.866 "w_mbytes_per_sec": 0 00:13:26.866 }, 00:13:26.866 "claimed": true, 00:13:26.866 "claim_type": "exclusive_write", 00:13:26.866 "zoned": false, 00:13:26.866 "supported_io_types": { 00:13:26.866 "read": true, 00:13:26.866 "write": true, 00:13:26.866 "unmap": true, 00:13:26.866 "flush": true, 00:13:26.866 "reset": true, 00:13:26.866 "nvme_admin": false, 00:13:26.866 "nvme_io": false, 00:13:26.866 "nvme_io_md": false, 00:13:26.866 "write_zeroes": true, 00:13:26.866 "zcopy": true, 00:13:26.866 "get_zone_info": false, 00:13:26.866 "zone_management": false, 00:13:26.866 "zone_append": false, 00:13:26.866 "compare": false, 00:13:26.866 "compare_and_write": false, 00:13:26.866 "abort": true, 00:13:26.866 "seek_hole": false, 00:13:26.866 "seek_data": false, 00:13:26.866 "copy": true, 00:13:26.866 "nvme_iov_md": false 00:13:26.866 }, 00:13:26.866 "memory_domains": [ 00:13:26.866 { 00:13:26.866 "dma_device_id": "system", 00:13:26.866 "dma_device_type": 1 00:13:26.866 }, 00:13:26.866 { 00:13:26.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.866 "dma_device_type": 2 00:13:26.866 } 00:13:26.866 ], 00:13:26.866 "driver_specific": {} 00:13:26.866 } 00:13:26.866 ] 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.866 07:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.866 "name": "Existed_Raid", 00:13:26.866 "uuid": "8f389c9f-7997-4d10-9398-1485abd4ddfa", 00:13:26.866 "strip_size_kb": 0, 00:13:26.866 "state": "configuring", 00:13:26.866 "raid_level": "raid1", 00:13:26.866 "superblock": true, 00:13:26.866 "num_base_bdevs": 3, 00:13:26.866 "num_base_bdevs_discovered": 1, 00:13:26.866 "num_base_bdevs_operational": 3, 00:13:26.866 "base_bdevs_list": [ 00:13:26.866 { 00:13:26.866 "name": "BaseBdev1", 00:13:26.866 "uuid": "dc6907fa-67b7-4bb2-8f24-d94a74395953", 00:13:26.866 "is_configured": true, 00:13:26.866 "data_offset": 2048, 00:13:26.866 "data_size": 63488 
00:13:26.866 }, 00:13:26.866 { 00:13:26.866 "name": "BaseBdev2", 00:13:26.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.866 "is_configured": false, 00:13:26.866 "data_offset": 0, 00:13:26.866 "data_size": 0 00:13:26.866 }, 00:13:26.866 { 00:13:26.866 "name": "BaseBdev3", 00:13:26.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.866 "is_configured": false, 00:13:26.866 "data_offset": 0, 00:13:26.866 "data_size": 0 00:13:26.866 } 00:13:26.866 ] 00:13:26.866 }' 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.866 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 [2024-11-20 07:10:24.493036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:27.435 [2024-11-20 07:10:24.493099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 [2024-11-20 07:10:24.501054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.435 [2024-11-20 07:10:24.503622] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:27.435 [2024-11-20 07:10:24.503838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:27.435 [2024-11-20 07:10:24.503982] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:27.435 [2024-11-20 07:10:24.504140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.435 "name": "Existed_Raid", 00:13:27.435 "uuid": "17899ffb-0918-4734-939f-5e5d44d9f96b", 00:13:27.435 "strip_size_kb": 0, 00:13:27.435 "state": "configuring", 00:13:27.435 "raid_level": "raid1", 00:13:27.435 "superblock": true, 00:13:27.435 "num_base_bdevs": 3, 00:13:27.435 "num_base_bdevs_discovered": 1, 00:13:27.435 "num_base_bdevs_operational": 3, 00:13:27.435 "base_bdevs_list": [ 00:13:27.435 { 00:13:27.435 "name": "BaseBdev1", 00:13:27.435 "uuid": "dc6907fa-67b7-4bb2-8f24-d94a74395953", 00:13:27.435 "is_configured": true, 00:13:27.435 "data_offset": 2048, 00:13:27.435 "data_size": 63488 00:13:27.435 }, 00:13:27.435 { 00:13:27.435 "name": "BaseBdev2", 00:13:27.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.435 "is_configured": false, 00:13:27.435 "data_offset": 0, 00:13:27.435 "data_size": 0 00:13:27.435 }, 00:13:27.435 { 00:13:27.435 "name": "BaseBdev3", 00:13:27.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.435 "is_configured": false, 00:13:27.435 "data_offset": 0, 00:13:27.435 "data_size": 0 00:13:27.435 } 00:13:27.435 ] 00:13:27.435 }' 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.435 07:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.002 BaseBdev2 00:13:28.002 [2024-11-20 07:10:25.108246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.002 [ 00:13:28.002 { 00:13:28.002 "name": "BaseBdev2", 00:13:28.002 "aliases": [ 00:13:28.002 "5fc9cf67-12df-4c01-b177-949bf1785e7d" 00:13:28.002 ], 00:13:28.002 "product_name": "Malloc disk", 00:13:28.002 "block_size": 512, 00:13:28.002 "num_blocks": 65536, 00:13:28.002 "uuid": "5fc9cf67-12df-4c01-b177-949bf1785e7d", 00:13:28.002 "assigned_rate_limits": { 00:13:28.002 "rw_ios_per_sec": 0, 00:13:28.002 "rw_mbytes_per_sec": 0, 00:13:28.002 "r_mbytes_per_sec": 0, 00:13:28.002 "w_mbytes_per_sec": 0 00:13:28.002 }, 00:13:28.002 "claimed": true, 00:13:28.002 "claim_type": "exclusive_write", 00:13:28.002 "zoned": false, 00:13:28.002 "supported_io_types": { 00:13:28.002 "read": true, 00:13:28.002 "write": true, 00:13:28.002 "unmap": true, 00:13:28.002 "flush": true, 00:13:28.002 "reset": true, 00:13:28.002 "nvme_admin": false, 00:13:28.002 "nvme_io": false, 00:13:28.002 "nvme_io_md": false, 00:13:28.002 "write_zeroes": true, 00:13:28.002 "zcopy": true, 00:13:28.002 "get_zone_info": false, 00:13:28.002 "zone_management": false, 00:13:28.002 "zone_append": false, 00:13:28.002 "compare": false, 00:13:28.002 "compare_and_write": false, 00:13:28.002 "abort": true, 00:13:28.002 "seek_hole": false, 00:13:28.002 "seek_data": false, 00:13:28.002 "copy": true, 00:13:28.002 "nvme_iov_md": false 00:13:28.002 }, 00:13:28.002 "memory_domains": [ 00:13:28.002 { 00:13:28.002 "dma_device_id": "system", 00:13:28.002 "dma_device_type": 1 00:13:28.002 }, 00:13:28.002 { 00:13:28.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.002 "dma_device_type": 2 00:13:28.002 } 00:13:28.002 ], 00:13:28.002 "driver_specific": {} 00:13:28.002 } 00:13:28.002 ] 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.002 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.003 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.003 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.003 
07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.003 "name": "Existed_Raid", 00:13:28.003 "uuid": "17899ffb-0918-4734-939f-5e5d44d9f96b", 00:13:28.003 "strip_size_kb": 0, 00:13:28.003 "state": "configuring", 00:13:28.003 "raid_level": "raid1", 00:13:28.003 "superblock": true, 00:13:28.003 "num_base_bdevs": 3, 00:13:28.003 "num_base_bdevs_discovered": 2, 00:13:28.003 "num_base_bdevs_operational": 3, 00:13:28.003 "base_bdevs_list": [ 00:13:28.003 { 00:13:28.003 "name": "BaseBdev1", 00:13:28.003 "uuid": "dc6907fa-67b7-4bb2-8f24-d94a74395953", 00:13:28.003 "is_configured": true, 00:13:28.003 "data_offset": 2048, 00:13:28.003 "data_size": 63488 00:13:28.003 }, 00:13:28.003 { 00:13:28.003 "name": "BaseBdev2", 00:13:28.003 "uuid": "5fc9cf67-12df-4c01-b177-949bf1785e7d", 00:13:28.003 "is_configured": true, 00:13:28.003 "data_offset": 2048, 00:13:28.003 "data_size": 63488 00:13:28.003 }, 00:13:28.003 { 00:13:28.003 "name": "BaseBdev3", 00:13:28.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.003 "is_configured": false, 00:13:28.003 "data_offset": 0, 00:13:28.003 "data_size": 0 00:13:28.003 } 00:13:28.003 ] 00:13:28.003 }' 00:13:28.003 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.003 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.570 [2024-11-20 07:10:25.705917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:28.570 [2024-11-20 07:10:25.706211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:13:28.570 [2024-11-20 07:10:25.706242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:28.570 BaseBdev3 00:13:28.570 [2024-11-20 07:10:25.706587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:28.570 [2024-11-20 07:10:25.706786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:28.570 [2024-11-20 07:10:25.706803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:28.570 [2024-11-20 07:10:25.706999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.570 07:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.570 [ 00:13:28.570 { 00:13:28.570 "name": "BaseBdev3", 00:13:28.570 "aliases": [ 00:13:28.570 "f9851f8a-6be4-4e87-871f-4b1d98a6e55a" 00:13:28.570 ], 00:13:28.570 "product_name": "Malloc disk", 00:13:28.570 "block_size": 512, 00:13:28.570 "num_blocks": 65536, 00:13:28.570 "uuid": "f9851f8a-6be4-4e87-871f-4b1d98a6e55a", 00:13:28.570 "assigned_rate_limits": { 00:13:28.570 "rw_ios_per_sec": 0, 00:13:28.570 "rw_mbytes_per_sec": 0, 00:13:28.570 "r_mbytes_per_sec": 0, 00:13:28.570 "w_mbytes_per_sec": 0 00:13:28.570 }, 00:13:28.570 "claimed": true, 00:13:28.570 "claim_type": "exclusive_write", 00:13:28.570 "zoned": false, 00:13:28.570 "supported_io_types": { 00:13:28.570 "read": true, 00:13:28.570 "write": true, 00:13:28.570 "unmap": true, 00:13:28.570 "flush": true, 00:13:28.570 "reset": true, 00:13:28.570 "nvme_admin": false, 00:13:28.570 "nvme_io": false, 00:13:28.570 "nvme_io_md": false, 00:13:28.570 "write_zeroes": true, 00:13:28.570 "zcopy": true, 00:13:28.570 "get_zone_info": false, 00:13:28.570 "zone_management": false, 00:13:28.570 "zone_append": false, 00:13:28.570 "compare": false, 00:13:28.570 "compare_and_write": false, 00:13:28.570 "abort": true, 00:13:28.570 "seek_hole": false, 00:13:28.570 "seek_data": false, 00:13:28.570 "copy": true, 00:13:28.570 "nvme_iov_md": false 00:13:28.570 }, 00:13:28.570 "memory_domains": [ 00:13:28.570 { 00:13:28.570 "dma_device_id": "system", 00:13:28.570 "dma_device_type": 1 00:13:28.570 }, 00:13:28.570 { 00:13:28.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.570 "dma_device_type": 2 00:13:28.570 } 00:13:28.570 ], 00:13:28.570 "driver_specific": {} 00:13:28.570 } 00:13:28.570 ] 
00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.570 
07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.570 "name": "Existed_Raid", 00:13:28.570 "uuid": "17899ffb-0918-4734-939f-5e5d44d9f96b", 00:13:28.570 "strip_size_kb": 0, 00:13:28.570 "state": "online", 00:13:28.570 "raid_level": "raid1", 00:13:28.570 "superblock": true, 00:13:28.570 "num_base_bdevs": 3, 00:13:28.570 "num_base_bdevs_discovered": 3, 00:13:28.570 "num_base_bdevs_operational": 3, 00:13:28.570 "base_bdevs_list": [ 00:13:28.570 { 00:13:28.570 "name": "BaseBdev1", 00:13:28.570 "uuid": "dc6907fa-67b7-4bb2-8f24-d94a74395953", 00:13:28.570 "is_configured": true, 00:13:28.570 "data_offset": 2048, 00:13:28.570 "data_size": 63488 00:13:28.570 }, 00:13:28.570 { 00:13:28.570 "name": "BaseBdev2", 00:13:28.570 "uuid": "5fc9cf67-12df-4c01-b177-949bf1785e7d", 00:13:28.570 "is_configured": true, 00:13:28.570 "data_offset": 2048, 00:13:28.570 "data_size": 63488 00:13:28.570 }, 00:13:28.570 { 00:13:28.570 "name": "BaseBdev3", 00:13:28.570 "uuid": "f9851f8a-6be4-4e87-871f-4b1d98a6e55a", 00:13:28.570 "is_configured": true, 00:13:28.570 "data_offset": 2048, 00:13:28.570 "data_size": 63488 00:13:28.570 } 00:13:28.570 ] 00:13:28.570 }' 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.570 07:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.138 [2024-11-20 07:10:26.254522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.138 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:29.138 "name": "Existed_Raid", 00:13:29.138 "aliases": [ 00:13:29.138 "17899ffb-0918-4734-939f-5e5d44d9f96b" 00:13:29.138 ], 00:13:29.138 "product_name": "Raid Volume", 00:13:29.138 "block_size": 512, 00:13:29.138 "num_blocks": 63488, 00:13:29.138 "uuid": "17899ffb-0918-4734-939f-5e5d44d9f96b", 00:13:29.138 "assigned_rate_limits": { 00:13:29.138 "rw_ios_per_sec": 0, 00:13:29.138 "rw_mbytes_per_sec": 0, 00:13:29.138 "r_mbytes_per_sec": 0, 00:13:29.138 "w_mbytes_per_sec": 0 00:13:29.138 }, 00:13:29.138 "claimed": false, 00:13:29.138 "zoned": false, 00:13:29.138 "supported_io_types": { 00:13:29.138 "read": true, 00:13:29.138 "write": true, 00:13:29.138 "unmap": false, 00:13:29.138 "flush": false, 00:13:29.138 "reset": true, 00:13:29.138 "nvme_admin": false, 00:13:29.138 "nvme_io": false, 00:13:29.138 "nvme_io_md": false, 00:13:29.138 "write_zeroes": true, 
00:13:29.138 "zcopy": false, 00:13:29.138 "get_zone_info": false, 00:13:29.138 "zone_management": false, 00:13:29.138 "zone_append": false, 00:13:29.138 "compare": false, 00:13:29.138 "compare_and_write": false, 00:13:29.138 "abort": false, 00:13:29.138 "seek_hole": false, 00:13:29.138 "seek_data": false, 00:13:29.138 "copy": false, 00:13:29.138 "nvme_iov_md": false 00:13:29.138 }, 00:13:29.138 "memory_domains": [ 00:13:29.138 { 00:13:29.138 "dma_device_id": "system", 00:13:29.138 "dma_device_type": 1 00:13:29.138 }, 00:13:29.138 { 00:13:29.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.138 "dma_device_type": 2 00:13:29.138 }, 00:13:29.138 { 00:13:29.138 "dma_device_id": "system", 00:13:29.138 "dma_device_type": 1 00:13:29.138 }, 00:13:29.138 { 00:13:29.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.138 "dma_device_type": 2 00:13:29.138 }, 00:13:29.138 { 00:13:29.138 "dma_device_id": "system", 00:13:29.138 "dma_device_type": 1 00:13:29.138 }, 00:13:29.138 { 00:13:29.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.138 "dma_device_type": 2 00:13:29.138 } 00:13:29.138 ], 00:13:29.138 "driver_specific": { 00:13:29.138 "raid": { 00:13:29.138 "uuid": "17899ffb-0918-4734-939f-5e5d44d9f96b", 00:13:29.138 "strip_size_kb": 0, 00:13:29.138 "state": "online", 00:13:29.138 "raid_level": "raid1", 00:13:29.138 "superblock": true, 00:13:29.138 "num_base_bdevs": 3, 00:13:29.138 "num_base_bdevs_discovered": 3, 00:13:29.139 "num_base_bdevs_operational": 3, 00:13:29.139 "base_bdevs_list": [ 00:13:29.139 { 00:13:29.139 "name": "BaseBdev1", 00:13:29.139 "uuid": "dc6907fa-67b7-4bb2-8f24-d94a74395953", 00:13:29.139 "is_configured": true, 00:13:29.139 "data_offset": 2048, 00:13:29.139 "data_size": 63488 00:13:29.139 }, 00:13:29.139 { 00:13:29.139 "name": "BaseBdev2", 00:13:29.139 "uuid": "5fc9cf67-12df-4c01-b177-949bf1785e7d", 00:13:29.139 "is_configured": true, 00:13:29.139 "data_offset": 2048, 00:13:29.139 "data_size": 63488 00:13:29.139 }, 00:13:29.139 { 
00:13:29.139 "name": "BaseBdev3", 00:13:29.139 "uuid": "f9851f8a-6be4-4e87-871f-4b1d98a6e55a", 00:13:29.139 "is_configured": true, 00:13:29.139 "data_offset": 2048, 00:13:29.139 "data_size": 63488 00:13:29.139 } 00:13:29.139 ] 00:13:29.139 } 00:13:29.139 } 00:13:29.139 }' 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:29.139 BaseBdev2 00:13:29.139 BaseBdev3' 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.139 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.397 07:10:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.397 [2024-11-20 07:10:26.618355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.397 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.398 
07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.398 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.398 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.398 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.398 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.656 07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.656 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.656 "name": "Existed_Raid", 00:13:29.656 "uuid": "17899ffb-0918-4734-939f-5e5d44d9f96b", 00:13:29.656 "strip_size_kb": 0, 00:13:29.656 "state": "online", 00:13:29.656 "raid_level": "raid1", 00:13:29.656 "superblock": true, 00:13:29.656 "num_base_bdevs": 3, 00:13:29.656 "num_base_bdevs_discovered": 2, 00:13:29.656 "num_base_bdevs_operational": 2, 00:13:29.656 "base_bdevs_list": [ 00:13:29.656 { 00:13:29.656 "name": null, 00:13:29.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.656 "is_configured": false, 00:13:29.656 "data_offset": 0, 00:13:29.656 "data_size": 63488 00:13:29.656 }, 00:13:29.656 { 00:13:29.656 "name": "BaseBdev2", 00:13:29.656 "uuid": "5fc9cf67-12df-4c01-b177-949bf1785e7d", 00:13:29.656 "is_configured": true, 00:13:29.656 "data_offset": 2048, 00:13:29.656 "data_size": 63488 00:13:29.656 }, 00:13:29.656 { 00:13:29.656 "name": "BaseBdev3", 00:13:29.656 "uuid": "f9851f8a-6be4-4e87-871f-4b1d98a6e55a", 00:13:29.656 "is_configured": true, 00:13:29.656 "data_offset": 2048, 00:13:29.656 "data_size": 63488 00:13:29.656 } 00:13:29.656 ] 00:13:29.656 }' 00:13:29.656 07:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.656 
07:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.915 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:29.915 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:29.915 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.173 [2024-11-20 07:10:27.287535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 
00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:30.173 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.174 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.174 [2024-11-20 07:10:27.435293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:30.174 [2024-11-20 07:10:27.435587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.433 [2024-11-20 07:10:27.522458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.433 [2024-11-20 07:10:27.522531] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.433 [2024-11-20 07:10:27.522550] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 
-- # (( i < num_base_bdevs )) 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.433 BaseBdev2 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:30.433 07:10:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.433 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.433 [ 00:13:30.433 { 00:13:30.433 "name": "BaseBdev2", 00:13:30.433 "aliases": [ 00:13:30.433 "6bd4f1da-abb1-49b1-b881-e7d380a88afd" 00:13:30.433 ], 00:13:30.433 "product_name": "Malloc disk", 00:13:30.433 "block_size": 512, 00:13:30.433 "num_blocks": 65536, 00:13:30.433 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:30.433 "assigned_rate_limits": { 00:13:30.433 "rw_ios_per_sec": 0, 00:13:30.433 "rw_mbytes_per_sec": 0, 00:13:30.433 "r_mbytes_per_sec": 0, 00:13:30.433 "w_mbytes_per_sec": 0 00:13:30.433 }, 00:13:30.433 "claimed": false, 00:13:30.433 "zoned": false, 00:13:30.433 "supported_io_types": { 00:13:30.433 "read": true, 00:13:30.433 "write": true, 00:13:30.433 "unmap": true, 00:13:30.433 "flush": true, 00:13:30.434 "reset": true, 00:13:30.434 "nvme_admin": false, 00:13:30.434 "nvme_io": false, 00:13:30.434 "nvme_io_md": false, 00:13:30.434 
"write_zeroes": true, 00:13:30.434 "zcopy": true, 00:13:30.434 "get_zone_info": false, 00:13:30.434 "zone_management": false, 00:13:30.434 "zone_append": false, 00:13:30.434 "compare": false, 00:13:30.434 "compare_and_write": false, 00:13:30.434 "abort": true, 00:13:30.434 "seek_hole": false, 00:13:30.434 "seek_data": false, 00:13:30.434 "copy": true, 00:13:30.434 "nvme_iov_md": false 00:13:30.434 }, 00:13:30.434 "memory_domains": [ 00:13:30.434 { 00:13:30.434 "dma_device_id": "system", 00:13:30.434 "dma_device_type": 1 00:13:30.434 }, 00:13:30.434 { 00:13:30.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.434 "dma_device_type": 2 00:13:30.434 } 00:13:30.434 ], 00:13:30.434 "driver_specific": {} 00:13:30.434 } 00:13:30.434 ] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.434 BaseBdev3 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
local bdev_timeout= 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.434 [ 00:13:30.434 { 00:13:30.434 "name": "BaseBdev3", 00:13:30.434 "aliases": [ 00:13:30.434 "fc18d45e-ece3-465f-8369-bc94007c0a91" 00:13:30.434 ], 00:13:30.434 "product_name": "Malloc disk", 00:13:30.434 "block_size": 512, 00:13:30.434 "num_blocks": 65536, 00:13:30.434 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:30.434 "assigned_rate_limits": { 00:13:30.434 "rw_ios_per_sec": 0, 00:13:30.434 "rw_mbytes_per_sec": 0, 00:13:30.434 "r_mbytes_per_sec": 0, 00:13:30.434 "w_mbytes_per_sec": 0 00:13:30.434 }, 00:13:30.434 "claimed": false, 00:13:30.434 "zoned": false, 00:13:30.434 "supported_io_types": { 00:13:30.434 "read": true, 00:13:30.434 "write": true, 00:13:30.434 "unmap": true, 00:13:30.434 "flush": true, 00:13:30.434 "reset": true, 00:13:30.434 "nvme_admin": false, 00:13:30.434 "nvme_io": false, 
00:13:30.434 "nvme_io_md": false, 00:13:30.434 "write_zeroes": true, 00:13:30.434 "zcopy": true, 00:13:30.434 "get_zone_info": false, 00:13:30.434 "zone_management": false, 00:13:30.434 "zone_append": false, 00:13:30.434 "compare": false, 00:13:30.434 "compare_and_write": false, 00:13:30.434 "abort": true, 00:13:30.434 "seek_hole": false, 00:13:30.434 "seek_data": false, 00:13:30.434 "copy": true, 00:13:30.434 "nvme_iov_md": false 00:13:30.434 }, 00:13:30.434 "memory_domains": [ 00:13:30.434 { 00:13:30.434 "dma_device_id": "system", 00:13:30.434 "dma_device_type": 1 00:13:30.434 }, 00:13:30.434 { 00:13:30.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.434 "dma_device_type": 2 00:13:30.434 } 00:13:30.434 ], 00:13:30.434 "driver_specific": {} 00:13:30.434 } 00:13:30.434 ] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.434 [2024-11-20 07:10:27.725435] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:30.434 [2024-11-20 07:10:27.725654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:30.434 [2024-11-20 07:10:27.725807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:13:30.434 [2024-11-20 07:10:27.728374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.434 07:10:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.693 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.693 "name": "Existed_Raid", 00:13:30.693 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:30.693 "strip_size_kb": 0, 00:13:30.693 "state": "configuring", 00:13:30.693 "raid_level": "raid1", 00:13:30.693 "superblock": true, 00:13:30.693 "num_base_bdevs": 3, 00:13:30.693 "num_base_bdevs_discovered": 2, 00:13:30.693 "num_base_bdevs_operational": 3, 00:13:30.693 "base_bdevs_list": [ 00:13:30.693 { 00:13:30.693 "name": "BaseBdev1", 00:13:30.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.693 "is_configured": false, 00:13:30.693 "data_offset": 0, 00:13:30.693 "data_size": 0 00:13:30.693 }, 00:13:30.693 { 00:13:30.693 "name": "BaseBdev2", 00:13:30.693 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:30.693 "is_configured": true, 00:13:30.693 "data_offset": 2048, 00:13:30.693 "data_size": 63488 00:13:30.693 }, 00:13:30.693 { 00:13:30.693 "name": "BaseBdev3", 00:13:30.693 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:30.693 "is_configured": true, 00:13:30.693 "data_offset": 2048, 00:13:30.693 "data_size": 63488 00:13:30.693 } 00:13:30.693 ] 00:13:30.693 }' 00:13:30.693 07:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.693 07:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.953 [2024-11-20 07:10:28.245577] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.953 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.212 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.212 "name": "Existed_Raid", 00:13:31.212 "uuid": 
"a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:31.212 "strip_size_kb": 0, 00:13:31.212 "state": "configuring", 00:13:31.212 "raid_level": "raid1", 00:13:31.212 "superblock": true, 00:13:31.212 "num_base_bdevs": 3, 00:13:31.212 "num_base_bdevs_discovered": 1, 00:13:31.212 "num_base_bdevs_operational": 3, 00:13:31.212 "base_bdevs_list": [ 00:13:31.212 { 00:13:31.212 "name": "BaseBdev1", 00:13:31.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.212 "is_configured": false, 00:13:31.212 "data_offset": 0, 00:13:31.212 "data_size": 0 00:13:31.212 }, 00:13:31.212 { 00:13:31.212 "name": null, 00:13:31.212 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:31.212 "is_configured": false, 00:13:31.212 "data_offset": 0, 00:13:31.212 "data_size": 63488 00:13:31.212 }, 00:13:31.212 { 00:13:31.212 "name": "BaseBdev3", 00:13:31.212 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:31.212 "is_configured": true, 00:13:31.212 "data_offset": 2048, 00:13:31.212 "data_size": 63488 00:13:31.212 } 00:13:31.212 ] 00:13:31.212 }' 00:13:31.212 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.212 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:31.781 07:10:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.781 [2024-11-20 07:10:28.880152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.781 BaseBdev1 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.781 [ 00:13:31.781 { 00:13:31.781 "name": "BaseBdev1", 00:13:31.781 "aliases": [ 00:13:31.781 "e06ff957-a8e6-465d-b118-6961fe1ab4d0" 00:13:31.781 ], 00:13:31.781 "product_name": "Malloc disk", 00:13:31.781 "block_size": 512, 00:13:31.781 "num_blocks": 65536, 00:13:31.781 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:31.781 "assigned_rate_limits": { 00:13:31.781 "rw_ios_per_sec": 0, 00:13:31.781 "rw_mbytes_per_sec": 0, 00:13:31.781 "r_mbytes_per_sec": 0, 00:13:31.781 "w_mbytes_per_sec": 0 00:13:31.781 }, 00:13:31.781 "claimed": true, 00:13:31.781 "claim_type": "exclusive_write", 00:13:31.781 "zoned": false, 00:13:31.781 "supported_io_types": { 00:13:31.781 "read": true, 00:13:31.781 "write": true, 00:13:31.781 "unmap": true, 00:13:31.781 "flush": true, 00:13:31.781 "reset": true, 00:13:31.781 "nvme_admin": false, 00:13:31.781 "nvme_io": false, 00:13:31.781 "nvme_io_md": false, 00:13:31.781 "write_zeroes": true, 00:13:31.781 "zcopy": true, 00:13:31.781 "get_zone_info": false, 00:13:31.781 "zone_management": false, 00:13:31.781 "zone_append": false, 00:13:31.781 "compare": false, 00:13:31.781 "compare_and_write": false, 00:13:31.781 "abort": true, 00:13:31.781 "seek_hole": false, 00:13:31.781 "seek_data": false, 00:13:31.781 "copy": true, 00:13:31.781 "nvme_iov_md": false 00:13:31.781 }, 00:13:31.781 "memory_domains": [ 00:13:31.781 { 00:13:31.781 "dma_device_id": "system", 00:13:31.781 "dma_device_type": 1 00:13:31.781 }, 00:13:31.781 { 00:13:31.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.781 "dma_device_type": 2 00:13:31.781 } 00:13:31.781 ], 00:13:31.781 "driver_specific": {} 00:13:31.781 } 00:13:31.781 ] 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:31.781 
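The `bdev_get_bdevs -b BaseBdev1` dump above records that the malloc bdev created by `bdev_malloc_create 32 512 -b BaseBdev1` is immediately claimed by the raid module with an `exclusive_write` claim. A small sanity check over that JSON (fields copied from the log; the snippet is illustrative, not test-suite code):

```python
import json

# BaseBdev1 descriptor as dumped above, trimmed to the fields checked here
bdev = json.loads("""
{
  "name": "BaseBdev1",
  "product_name": "Malloc disk",
  "block_size": 512,
  "num_blocks": 65536,
  "claimed": true,
  "claim_type": "exclusive_write"
}
""")

# `bdev_malloc_create 32 512` asked for a 32 MiB bdev built from 512 B blocks
assert bdev["num_blocks"] * bdev["block_size"] == 32 * 1024 * 1024

# An exclusive_write claim means no other module may open the bdev for writing
assert bdev["claimed"] is True
assert bdev["claim_type"] == "exclusive_write"
```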
07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.781 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.782 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.782 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.782 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.782 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.782 "name": "Existed_Raid", 00:13:31.782 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:31.782 "strip_size_kb": 0, 
00:13:31.782 "state": "configuring", 00:13:31.782 "raid_level": "raid1", 00:13:31.782 "superblock": true, 00:13:31.782 "num_base_bdevs": 3, 00:13:31.782 "num_base_bdevs_discovered": 2, 00:13:31.782 "num_base_bdevs_operational": 3, 00:13:31.782 "base_bdevs_list": [ 00:13:31.782 { 00:13:31.782 "name": "BaseBdev1", 00:13:31.782 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:31.782 "is_configured": true, 00:13:31.782 "data_offset": 2048, 00:13:31.782 "data_size": 63488 00:13:31.782 }, 00:13:31.782 { 00:13:31.782 "name": null, 00:13:31.782 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:31.782 "is_configured": false, 00:13:31.782 "data_offset": 0, 00:13:31.782 "data_size": 63488 00:13:31.782 }, 00:13:31.782 { 00:13:31.782 "name": "BaseBdev3", 00:13:31.782 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:31.782 "is_configured": true, 00:13:31.782 "data_offset": 2048, 00:13:31.782 "data_size": 63488 00:13:31.782 } 00:13:31.782 ] 00:13:31.782 }' 00:13:31.782 07:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.782 07:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.351 [2024-11-20 07:10:29.480393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.351 "name": "Existed_Raid", 00:13:32.351 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:32.351 "strip_size_kb": 0, 00:13:32.351 "state": "configuring", 00:13:32.351 "raid_level": "raid1", 00:13:32.351 "superblock": true, 00:13:32.351 "num_base_bdevs": 3, 00:13:32.351 "num_base_bdevs_discovered": 1, 00:13:32.351 "num_base_bdevs_operational": 3, 00:13:32.351 "base_bdevs_list": [ 00:13:32.351 { 00:13:32.351 "name": "BaseBdev1", 00:13:32.351 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:32.351 "is_configured": true, 00:13:32.351 "data_offset": 2048, 00:13:32.351 "data_size": 63488 00:13:32.351 }, 00:13:32.351 { 00:13:32.351 "name": null, 00:13:32.351 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:32.351 "is_configured": false, 00:13:32.351 "data_offset": 0, 00:13:32.351 "data_size": 63488 00:13:32.351 }, 00:13:32.351 { 00:13:32.351 "name": null, 00:13:32.351 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:32.351 "is_configured": false, 00:13:32.351 "data_offset": 0, 00:13:32.351 "data_size": 63488 00:13:32.351 } 00:13:32.351 ] 00:13:32.351 }' 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.351 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:32.919 07:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.919 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:32.919 07:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.919 [2024-11-20 07:10:30.048613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.919 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.920 "name": "Existed_Raid", 00:13:32.920 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:32.920 "strip_size_kb": 0, 00:13:32.920 "state": "configuring", 00:13:32.920 "raid_level": "raid1", 00:13:32.920 "superblock": true, 00:13:32.920 "num_base_bdevs": 3, 00:13:32.920 "num_base_bdevs_discovered": 2, 00:13:32.920 "num_base_bdevs_operational": 3, 00:13:32.920 "base_bdevs_list": [ 00:13:32.920 { 00:13:32.920 "name": "BaseBdev1", 00:13:32.920 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:32.920 "is_configured": true, 00:13:32.920 "data_offset": 2048, 00:13:32.920 "data_size": 63488 00:13:32.920 }, 00:13:32.920 { 00:13:32.920 "name": null, 00:13:32.920 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:32.920 "is_configured": false, 00:13:32.920 "data_offset": 0, 00:13:32.920 "data_size": 63488 00:13:32.920 }, 00:13:32.920 { 00:13:32.920 "name": "BaseBdev3", 00:13:32.920 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:32.920 "is_configured": true, 00:13:32.920 "data_offset": 2048, 00:13:32.920 "data_size": 63488 00:13:32.920 } 00:13:32.920 ] 00:13:32.920 }' 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.920 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 [2024-11-20 07:10:30.644850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.488 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.488 "name": "Existed_Raid", 00:13:33.488 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:33.488 "strip_size_kb": 0, 00:13:33.488 "state": "configuring", 00:13:33.488 "raid_level": "raid1", 00:13:33.488 "superblock": true, 00:13:33.488 "num_base_bdevs": 3, 00:13:33.488 "num_base_bdevs_discovered": 1, 00:13:33.488 "num_base_bdevs_operational": 3, 00:13:33.488 "base_bdevs_list": [ 00:13:33.488 { 00:13:33.488 "name": null, 00:13:33.488 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:33.488 "is_configured": false, 00:13:33.488 "data_offset": 0, 00:13:33.488 "data_size": 63488 00:13:33.488 }, 00:13:33.488 { 00:13:33.488 "name": null, 00:13:33.488 "uuid": 
"6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:33.488 "is_configured": false, 00:13:33.488 "data_offset": 0, 00:13:33.488 "data_size": 63488 00:13:33.488 }, 00:13:33.488 { 00:13:33.488 "name": "BaseBdev3", 00:13:33.488 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:33.488 "is_configured": true, 00:13:33.488 "data_offset": 2048, 00:13:33.488 "data_size": 63488 00:13:33.488 } 00:13:33.488 ] 00:13:33.488 }' 00:13:33.489 07:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.489 07:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.057 [2024-11-20 07:10:31.299842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- 
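After `bdev_malloc_delete BaseBdev1`, the dump above shows the removed slot keeping its uuid while reporting `"name": null` and `"is_configured": false`, and `num_base_bdevs_discovered` dropping back to 1. The bookkeeping implied by that dump can be sketched as follows (a toy reconstruction in Python, not SPDK's actual C logic):

```python
# base_bdevs_list as dumped after the malloc delete: removed/missing slots
# keep their uuid but report name: null and is_configured: false
base_bdevs_list = [
    {"name": None, "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", "is_configured": False},
    {"name": None, "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", "is_configured": False},
    {"name": "BaseBdev3", "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", "is_configured": True},
]

# num_base_bdevs_discovered counts the configured slots; operational stays
# at 3, so the array remains "configuring" until all slots are rediscovered
num_discovered = sum(b["is_configured"] for b in base_bdevs_list)
state = "online" if num_discovered == 3 else "configuring"
```

This is why the subsequent `bdev_raid_add_base_bdev` calls in the log re-raise the discovered count, and the array only transitions to `online` once all three base bdevs are configured again.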
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.057 "name": "Existed_Raid", 00:13:34.057 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:34.057 "strip_size_kb": 0, 00:13:34.057 "state": "configuring", 00:13:34.057 
"raid_level": "raid1", 00:13:34.057 "superblock": true, 00:13:34.057 "num_base_bdevs": 3, 00:13:34.057 "num_base_bdevs_discovered": 2, 00:13:34.057 "num_base_bdevs_operational": 3, 00:13:34.057 "base_bdevs_list": [ 00:13:34.057 { 00:13:34.057 "name": null, 00:13:34.057 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:34.057 "is_configured": false, 00:13:34.057 "data_offset": 0, 00:13:34.057 "data_size": 63488 00:13:34.057 }, 00:13:34.057 { 00:13:34.057 "name": "BaseBdev2", 00:13:34.057 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:34.057 "is_configured": true, 00:13:34.057 "data_offset": 2048, 00:13:34.057 "data_size": 63488 00:13:34.057 }, 00:13:34.057 { 00:13:34.057 "name": "BaseBdev3", 00:13:34.057 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:34.057 "is_configured": true, 00:13:34.057 "data_offset": 2048, 00:13:34.057 "data_size": 63488 00:13:34.057 } 00:13:34.057 ] 00:13:34.057 }' 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.057 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.623 07:10:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e06ff957-a8e6-465d-b118-6961fe1ab4d0 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.623 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.880 [2024-11-20 07:10:31.958108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:34.880 [2024-11-20 07:10:31.958639] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:34.880 [2024-11-20 07:10:31.958664] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.880 [2024-11-20 07:10:31.958988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:34.880 NewBaseBdev 00:13:34.880 [2024-11-20 07:10:31.959208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:34.880 [2024-11-20 07:10:31.959231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:34.880 [2024-11-20 07:10:31.959388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:34.880 
07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.880 [ 00:13:34.880 { 00:13:34.880 "name": "NewBaseBdev", 00:13:34.880 "aliases": [ 00:13:34.880 "e06ff957-a8e6-465d-b118-6961fe1ab4d0" 00:13:34.880 ], 00:13:34.880 "product_name": "Malloc disk", 00:13:34.880 "block_size": 512, 00:13:34.880 "num_blocks": 65536, 00:13:34.880 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:34.880 "assigned_rate_limits": { 00:13:34.880 "rw_ios_per_sec": 0, 00:13:34.880 "rw_mbytes_per_sec": 0, 00:13:34.880 "r_mbytes_per_sec": 0, 00:13:34.880 "w_mbytes_per_sec": 0 00:13:34.880 }, 00:13:34.880 "claimed": true, 00:13:34.880 "claim_type": "exclusive_write", 00:13:34.880 
"zoned": false, 00:13:34.880 "supported_io_types": { 00:13:34.880 "read": true, 00:13:34.880 "write": true, 00:13:34.880 "unmap": true, 00:13:34.880 "flush": true, 00:13:34.880 "reset": true, 00:13:34.880 "nvme_admin": false, 00:13:34.880 "nvme_io": false, 00:13:34.880 "nvme_io_md": false, 00:13:34.880 "write_zeroes": true, 00:13:34.880 "zcopy": true, 00:13:34.880 "get_zone_info": false, 00:13:34.880 "zone_management": false, 00:13:34.880 "zone_append": false, 00:13:34.880 "compare": false, 00:13:34.880 "compare_and_write": false, 00:13:34.880 "abort": true, 00:13:34.880 "seek_hole": false, 00:13:34.880 "seek_data": false, 00:13:34.880 "copy": true, 00:13:34.880 "nvme_iov_md": false 00:13:34.880 }, 00:13:34.880 "memory_domains": [ 00:13:34.880 { 00:13:34.880 "dma_device_id": "system", 00:13:34.880 "dma_device_type": 1 00:13:34.880 }, 00:13:34.880 { 00:13:34.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.880 "dma_device_type": 2 00:13:34.880 } 00:13:34.880 ], 00:13:34.880 "driver_specific": {} 00:13:34.880 } 00:13:34.880 ] 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.880 07:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.880 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.880 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.880 "name": "Existed_Raid", 00:13:34.880 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:34.880 "strip_size_kb": 0, 00:13:34.880 "state": "online", 00:13:34.880 "raid_level": "raid1", 00:13:34.880 "superblock": true, 00:13:34.880 "num_base_bdevs": 3, 00:13:34.880 "num_base_bdevs_discovered": 3, 00:13:34.880 "num_base_bdevs_operational": 3, 00:13:34.880 "base_bdevs_list": [ 00:13:34.880 { 00:13:34.880 "name": "NewBaseBdev", 00:13:34.880 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:34.880 "is_configured": true, 00:13:34.880 "data_offset": 2048, 00:13:34.880 "data_size": 63488 00:13:34.880 }, 00:13:34.880 { 00:13:34.880 "name": "BaseBdev2", 00:13:34.880 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:34.880 "is_configured": true, 00:13:34.880 "data_offset": 2048, 00:13:34.880 "data_size": 63488 00:13:34.880 }, 00:13:34.880 
{ 00:13:34.880 "name": "BaseBdev3", 00:13:34.880 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:34.880 "is_configured": true, 00:13:34.880 "data_offset": 2048, 00:13:34.880 "data_size": 63488 00:13:34.880 } 00:13:34.880 ] 00:13:34.880 }' 00:13:34.880 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.880 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:35.444 [2024-11-20 07:10:32.510703] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.444 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:35.444 "name": "Existed_Raid", 00:13:35.444 
"aliases": [ 00:13:35.444 "a3b20cb2-ba72-4971-a236-8eb7b1d47e91" 00:13:35.444 ], 00:13:35.444 "product_name": "Raid Volume", 00:13:35.444 "block_size": 512, 00:13:35.444 "num_blocks": 63488, 00:13:35.444 "uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:35.444 "assigned_rate_limits": { 00:13:35.444 "rw_ios_per_sec": 0, 00:13:35.444 "rw_mbytes_per_sec": 0, 00:13:35.444 "r_mbytes_per_sec": 0, 00:13:35.444 "w_mbytes_per_sec": 0 00:13:35.444 }, 00:13:35.444 "claimed": false, 00:13:35.444 "zoned": false, 00:13:35.444 "supported_io_types": { 00:13:35.444 "read": true, 00:13:35.444 "write": true, 00:13:35.444 "unmap": false, 00:13:35.444 "flush": false, 00:13:35.444 "reset": true, 00:13:35.444 "nvme_admin": false, 00:13:35.444 "nvme_io": false, 00:13:35.444 "nvme_io_md": false, 00:13:35.444 "write_zeroes": true, 00:13:35.444 "zcopy": false, 00:13:35.444 "get_zone_info": false, 00:13:35.444 "zone_management": false, 00:13:35.444 "zone_append": false, 00:13:35.444 "compare": false, 00:13:35.444 "compare_and_write": false, 00:13:35.444 "abort": false, 00:13:35.444 "seek_hole": false, 00:13:35.444 "seek_data": false, 00:13:35.444 "copy": false, 00:13:35.444 "nvme_iov_md": false 00:13:35.444 }, 00:13:35.444 "memory_domains": [ 00:13:35.444 { 00:13:35.444 "dma_device_id": "system", 00:13:35.444 "dma_device_type": 1 00:13:35.444 }, 00:13:35.444 { 00:13:35.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.444 "dma_device_type": 2 00:13:35.444 }, 00:13:35.444 { 00:13:35.444 "dma_device_id": "system", 00:13:35.444 "dma_device_type": 1 00:13:35.444 }, 00:13:35.444 { 00:13:35.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.444 "dma_device_type": 2 00:13:35.444 }, 00:13:35.444 { 00:13:35.444 "dma_device_id": "system", 00:13:35.444 "dma_device_type": 1 00:13:35.444 }, 00:13:35.444 { 00:13:35.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.444 "dma_device_type": 2 00:13:35.444 } 00:13:35.444 ], 00:13:35.444 "driver_specific": { 00:13:35.444 "raid": { 00:13:35.444 
"uuid": "a3b20cb2-ba72-4971-a236-8eb7b1d47e91", 00:13:35.444 "strip_size_kb": 0, 00:13:35.444 "state": "online", 00:13:35.444 "raid_level": "raid1", 00:13:35.444 "superblock": true, 00:13:35.444 "num_base_bdevs": 3, 00:13:35.444 "num_base_bdevs_discovered": 3, 00:13:35.444 "num_base_bdevs_operational": 3, 00:13:35.444 "base_bdevs_list": [ 00:13:35.444 { 00:13:35.444 "name": "NewBaseBdev", 00:13:35.444 "uuid": "e06ff957-a8e6-465d-b118-6961fe1ab4d0", 00:13:35.444 "is_configured": true, 00:13:35.444 "data_offset": 2048, 00:13:35.444 "data_size": 63488 00:13:35.444 }, 00:13:35.444 { 00:13:35.444 "name": "BaseBdev2", 00:13:35.444 "uuid": "6bd4f1da-abb1-49b1-b881-e7d380a88afd", 00:13:35.444 "is_configured": true, 00:13:35.444 "data_offset": 2048, 00:13:35.444 "data_size": 63488 00:13:35.444 }, 00:13:35.445 { 00:13:35.445 "name": "BaseBdev3", 00:13:35.445 "uuid": "fc18d45e-ece3-465f-8369-bc94007c0a91", 00:13:35.445 "is_configured": true, 00:13:35.445 "data_offset": 2048, 00:13:35.445 "data_size": 63488 00:13:35.445 } 00:13:35.445 ] 00:13:35.445 } 00:13:35.445 } 00:13:35.445 }' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:35.445 BaseBdev2 00:13:35.445 BaseBdev3' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:35.445 07:10:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.445 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.703 [2024-11-20 07:10:32.862403] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:35.703 [2024-11-20 07:10:32.862610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:35.703 [2024-11-20 07:10:32.862844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.703 [2024-11-20 07:10:32.863248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.703 [2024-11-20 07:10:32.863283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68007 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 68007 ']' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68007 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68007 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68007' 00:13:35.703 killing process with pid 68007 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68007 00:13:35.703 [2024-11-20 07:10:32.903095] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.703 07:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68007 00:13:35.961 [2024-11-20 07:10:33.170812] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.336 07:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:37.336 00:13:37.336 real 0m11.953s 00:13:37.336 user 0m19.885s 00:13:37.336 sys 0m1.565s 00:13:37.336 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.336 07:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.336 ************************************ 00:13:37.336 END TEST raid_state_function_test_sb 00:13:37.336 ************************************ 00:13:37.336 07:10:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:13:37.336 07:10:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:37.336 07:10:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.336 07:10:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.336 ************************************ 00:13:37.336 START TEST raid_superblock_test 00:13:37.337 ************************************ 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68638 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68638 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68638 ']' 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:37.337 07:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.337 [2024-11-20 07:10:34.407487] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:13:37.337 [2024-11-20 07:10:34.407656] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68638 ] 00:13:37.337 [2024-11-20 07:10:34.581497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.595 [2024-11-20 07:10:34.715242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.854 [2024-11-20 07:10:34.924183] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:37.854 [2024-11-20 07:10:34.924262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:38.113 
07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.113 malloc1 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.113 [2024-11-20 07:10:35.415508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:38.113 [2024-11-20 07:10:35.415845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.113 [2024-11-20 07:10:35.416031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:38.113 [2024-11-20 07:10:35.416161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.113 [2024-11-20 07:10:35.418981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.113 [2024-11-20 07:10:35.419168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:38.113 pt1 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.113 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.372 malloc2 00:13:38.372 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.373 [2024-11-20 07:10:35.463845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:38.373 [2024-11-20 07:10:35.464072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.373 [2024-11-20 07:10:35.464239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:38.373 [2024-11-20 07:10:35.464266] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.373 [2024-11-20 07:10:35.467111] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.373 [2024-11-20 07:10:35.467166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:38.373 
pt2 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.373 malloc3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.373 [2024-11-20 07:10:35.529044] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:38.373 [2024-11-20 07:10:35.529124] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:38.373 [2024-11-20 07:10:35.529157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:38.373 [2024-11-20 07:10:35.529173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:38.373 [2024-11-20 07:10:35.532032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:38.373 [2024-11-20 07:10:35.532080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:38.373 pt3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.373 [2024-11-20 07:10:35.537093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:38.373 [2024-11-20 07:10:35.539741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:38.373 [2024-11-20 07:10:35.539991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:38.373 [2024-11-20 07:10:35.540332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:38.373 [2024-11-20 07:10:35.540469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:38.373 [2024-11-20 07:10:35.540845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:38.373 
[2024-11-20 07:10:35.541206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:38.373 [2024-11-20 07:10:35.541336] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:38.373 [2024-11-20 07:10:35.541692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.373 "name": "raid_bdev1", 00:13:38.373 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:38.373 "strip_size_kb": 0, 00:13:38.373 "state": "online", 00:13:38.373 "raid_level": "raid1", 00:13:38.373 "superblock": true, 00:13:38.373 "num_base_bdevs": 3, 00:13:38.373 "num_base_bdevs_discovered": 3, 00:13:38.373 "num_base_bdevs_operational": 3, 00:13:38.373 "base_bdevs_list": [ 00:13:38.373 { 00:13:38.373 "name": "pt1", 00:13:38.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.373 "is_configured": true, 00:13:38.373 "data_offset": 2048, 00:13:38.373 "data_size": 63488 00:13:38.373 }, 00:13:38.373 { 00:13:38.373 "name": "pt2", 00:13:38.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.373 "is_configured": true, 00:13:38.373 "data_offset": 2048, 00:13:38.373 "data_size": 63488 00:13:38.373 }, 00:13:38.373 { 00:13:38.373 "name": "pt3", 00:13:38.373 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.373 "is_configured": true, 00:13:38.373 "data_offset": 2048, 00:13:38.373 "data_size": 63488 00:13:38.373 } 00:13:38.373 ] 00:13:38.373 }' 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.373 07:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:38.940 07:10:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.940 [2024-11-20 07:10:36.050215] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:38.940 "name": "raid_bdev1", 00:13:38.940 "aliases": [ 00:13:38.940 "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e" 00:13:38.940 ], 00:13:38.940 "product_name": "Raid Volume", 00:13:38.940 "block_size": 512, 00:13:38.940 "num_blocks": 63488, 00:13:38.940 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:38.940 "assigned_rate_limits": { 00:13:38.940 "rw_ios_per_sec": 0, 00:13:38.940 "rw_mbytes_per_sec": 0, 00:13:38.940 "r_mbytes_per_sec": 0, 00:13:38.940 "w_mbytes_per_sec": 0 00:13:38.940 }, 00:13:38.940 "claimed": false, 00:13:38.940 "zoned": false, 00:13:38.940 "supported_io_types": { 00:13:38.940 "read": true, 00:13:38.940 "write": true, 00:13:38.940 "unmap": false, 00:13:38.940 "flush": false, 00:13:38.940 "reset": true, 00:13:38.940 "nvme_admin": false, 00:13:38.940 "nvme_io": false, 00:13:38.940 "nvme_io_md": false, 00:13:38.940 "write_zeroes": true, 00:13:38.940 "zcopy": false, 00:13:38.940 "get_zone_info": false, 00:13:38.940 "zone_management": false, 00:13:38.940 "zone_append": false, 00:13:38.940 "compare": false, 00:13:38.940 
"compare_and_write": false, 00:13:38.940 "abort": false, 00:13:38.940 "seek_hole": false, 00:13:38.940 "seek_data": false, 00:13:38.940 "copy": false, 00:13:38.940 "nvme_iov_md": false 00:13:38.940 }, 00:13:38.940 "memory_domains": [ 00:13:38.940 { 00:13:38.940 "dma_device_id": "system", 00:13:38.940 "dma_device_type": 1 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.940 "dma_device_type": 2 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "dma_device_id": "system", 00:13:38.940 "dma_device_type": 1 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.940 "dma_device_type": 2 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "dma_device_id": "system", 00:13:38.940 "dma_device_type": 1 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.940 "dma_device_type": 2 00:13:38.940 } 00:13:38.940 ], 00:13:38.940 "driver_specific": { 00:13:38.940 "raid": { 00:13:38.940 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:38.940 "strip_size_kb": 0, 00:13:38.940 "state": "online", 00:13:38.940 "raid_level": "raid1", 00:13:38.940 "superblock": true, 00:13:38.940 "num_base_bdevs": 3, 00:13:38.940 "num_base_bdevs_discovered": 3, 00:13:38.940 "num_base_bdevs_operational": 3, 00:13:38.940 "base_bdevs_list": [ 00:13:38.940 { 00:13:38.940 "name": "pt1", 00:13:38.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:38.940 "is_configured": true, 00:13:38.940 "data_offset": 2048, 00:13:38.940 "data_size": 63488 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "name": "pt2", 00:13:38.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:38.940 "is_configured": true, 00:13:38.940 "data_offset": 2048, 00:13:38.940 "data_size": 63488 00:13:38.940 }, 00:13:38.940 { 00:13:38.940 "name": "pt3", 00:13:38.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:38.940 "is_configured": true, 00:13:38.940 "data_offset": 2048, 00:13:38.940 "data_size": 63488 00:13:38.940 } 
00:13:38.940 ] 00:13:38.940 } 00:13:38.940 } 00:13:38.940 }' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:38.940 pt2 00:13:38.940 pt3' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:38.940 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 [2024-11-20 07:10:36.382207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b9701b4c-5d3f-44b3-b2e8-f368af83ca3e 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b9701b4c-5d3f-44b3-b2e8-f368af83ca3e ']' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 [2024-11-20 07:10:36.425874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.199 [2024-11-20 07:10:36.426047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.199 [2024-11-20 07:10:36.426270] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.199 [2024-11-20 07:10:36.426492] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.199 [2024-11-20 07:10:36.426606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:39.199 07:10:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.199 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.458 [2024-11-20 07:10:36.569968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:39.458 [2024-11-20 07:10:36.572505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:39.458 [2024-11-20 07:10:36.572701] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:39.458 [2024-11-20 07:10:36.572783] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:39.458 [2024-11-20 07:10:36.572857] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:39.458 [2024-11-20 07:10:36.572908] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:39.458 [2024-11-20 07:10:36.572938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:39.458 [2024-11-20 07:10:36.572951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:13:39.458 request: 00:13:39.458 { 00:13:39.458 "name": "raid_bdev1", 00:13:39.458 "raid_level": "raid1", 00:13:39.458 "base_bdevs": [ 00:13:39.458 "malloc1", 00:13:39.458 "malloc2", 00:13:39.458 "malloc3" 00:13:39.458 ], 00:13:39.458 "superblock": false, 00:13:39.458 "method": "bdev_raid_create", 00:13:39.458 "req_id": 1 00:13:39.458 } 00:13:39.458 Got JSON-RPC error response 00:13:39.458 response: 00:13:39.458 { 00:13:39.458 "code": -17, 00:13:39.458 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:39.458 } 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:39.458 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.459 [2024-11-20 07:10:36.629933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:39.459 [2024-11-20 07:10:36.630127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.459 [2024-11-20 07:10:36.630206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:39.459 [2024-11-20 07:10:36.630332] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.459 [2024-11-20 07:10:36.633320] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.459 [2024-11-20 07:10:36.633365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:39.459 [2024-11-20 07:10:36.633479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:39.459 [2024-11-20 07:10:36.633543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:39.459 pt1 00:13:39.459 
07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.459 "name": "raid_bdev1", 00:13:39.459 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:39.459 "strip_size_kb": 0, 00:13:39.459 
"state": "configuring", 00:13:39.459 "raid_level": "raid1", 00:13:39.459 "superblock": true, 00:13:39.459 "num_base_bdevs": 3, 00:13:39.459 "num_base_bdevs_discovered": 1, 00:13:39.459 "num_base_bdevs_operational": 3, 00:13:39.459 "base_bdevs_list": [ 00:13:39.459 { 00:13:39.459 "name": "pt1", 00:13:39.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:39.459 "is_configured": true, 00:13:39.459 "data_offset": 2048, 00:13:39.459 "data_size": 63488 00:13:39.459 }, 00:13:39.459 { 00:13:39.459 "name": null, 00:13:39.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:39.459 "is_configured": false, 00:13:39.459 "data_offset": 2048, 00:13:39.459 "data_size": 63488 00:13:39.459 }, 00:13:39.459 { 00:13:39.459 "name": null, 00:13:39.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:39.459 "is_configured": false, 00:13:39.459 "data_offset": 2048, 00:13:39.459 "data_size": 63488 00:13:39.459 } 00:13:39.459 ] 00:13:39.459 }' 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.459 07:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.026 [2024-11-20 07:10:37.158173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.026 [2024-11-20 07:10:37.159336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.026 [2024-11-20 07:10:37.159385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:40.026 
[2024-11-20 07:10:37.159402] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.026 [2024-11-20 07:10:37.160010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.026 [2024-11-20 07:10:37.160042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.026 [2024-11-20 07:10:37.160149] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.026 [2024-11-20 07:10:37.160181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.026 pt2 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.026 [2024-11-20 07:10:37.166093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.026 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.027 "name": "raid_bdev1", 00:13:40.027 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:40.027 "strip_size_kb": 0, 00:13:40.027 "state": "configuring", 00:13:40.027 "raid_level": "raid1", 00:13:40.027 "superblock": true, 00:13:40.027 "num_base_bdevs": 3, 00:13:40.027 "num_base_bdevs_discovered": 1, 00:13:40.027 "num_base_bdevs_operational": 3, 00:13:40.027 "base_bdevs_list": [ 00:13:40.027 { 00:13:40.027 "name": "pt1", 00:13:40.027 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.027 "is_configured": true, 00:13:40.027 "data_offset": 2048, 00:13:40.027 "data_size": 63488 00:13:40.027 }, 00:13:40.027 { 00:13:40.027 "name": null, 00:13:40.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.027 "is_configured": false, 00:13:40.027 "data_offset": 0, 00:13:40.027 "data_size": 63488 00:13:40.027 }, 00:13:40.027 { 00:13:40.027 "name": null, 00:13:40.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.027 "is_configured": false, 00:13:40.027 
"data_offset": 2048, 00:13:40.027 "data_size": 63488 00:13:40.027 } 00:13:40.027 ] 00:13:40.027 }' 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.027 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.676 [2024-11-20 07:10:37.698261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:40.676 [2024-11-20 07:10:37.698491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.676 [2024-11-20 07:10:37.698673] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:40.676 [2024-11-20 07:10:37.698705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.676 [2024-11-20 07:10:37.699318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.676 [2024-11-20 07:10:37.699351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:40.676 [2024-11-20 07:10:37.699451] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:40.676 [2024-11-20 07:10:37.699506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:40.676 pt2 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.676 07:10:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.676 [2024-11-20 07:10:37.706217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:40.676 [2024-11-20 07:10:37.706404] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.676 [2024-11-20 07:10:37.706595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:40.676 [2024-11-20 07:10:37.706761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.676 [2024-11-20 07:10:37.707342] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.676 [2024-11-20 07:10:37.707503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:40.676 [2024-11-20 07:10:37.707705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:40.676 [2024-11-20 07:10:37.707891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:40.676 [2024-11-20 07:10:37.708160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:40.676 [2024-11-20 07:10:37.708298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:40.676 [2024-11-20 07:10:37.708639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:40.676 pt3 00:13:40.676 [2024-11-20 07:10:37.708999] bdev_raid.c:1764:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x617000007e80 00:13:40.676 [2024-11-20 07:10:37.709023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:40.676 [2024-11-20 07:10:37.709203] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.676 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.677 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.677 07:10:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.677 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.677 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.677 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.677 "name": "raid_bdev1", 00:13:40.677 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:40.677 "strip_size_kb": 0, 00:13:40.677 "state": "online", 00:13:40.677 "raid_level": "raid1", 00:13:40.677 "superblock": true, 00:13:40.677 "num_base_bdevs": 3, 00:13:40.677 "num_base_bdevs_discovered": 3, 00:13:40.677 "num_base_bdevs_operational": 3, 00:13:40.677 "base_bdevs_list": [ 00:13:40.677 { 00:13:40.677 "name": "pt1", 00:13:40.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:40.677 "is_configured": true, 00:13:40.677 "data_offset": 2048, 00:13:40.677 "data_size": 63488 00:13:40.677 }, 00:13:40.677 { 00:13:40.677 "name": "pt2", 00:13:40.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:40.677 "is_configured": true, 00:13:40.677 "data_offset": 2048, 00:13:40.677 "data_size": 63488 00:13:40.677 }, 00:13:40.677 { 00:13:40.677 "name": "pt3", 00:13:40.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:40.677 "is_configured": true, 00:13:40.677 "data_offset": 2048, 00:13:40.677 "data_size": 63488 00:13:40.677 } 00:13:40.677 ] 00:13:40.677 }' 00:13:40.677 07:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.677 07:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.245 [2024-11-20 07:10:38.266776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.245 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.245 "name": "raid_bdev1", 00:13:41.245 "aliases": [ 00:13:41.246 "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e" 00:13:41.246 ], 00:13:41.246 "product_name": "Raid Volume", 00:13:41.246 "block_size": 512, 00:13:41.246 "num_blocks": 63488, 00:13:41.246 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:41.246 "assigned_rate_limits": { 00:13:41.246 "rw_ios_per_sec": 0, 00:13:41.246 "rw_mbytes_per_sec": 0, 00:13:41.246 "r_mbytes_per_sec": 0, 00:13:41.246 "w_mbytes_per_sec": 0 00:13:41.246 }, 00:13:41.246 "claimed": false, 00:13:41.246 "zoned": false, 00:13:41.246 "supported_io_types": { 00:13:41.246 "read": true, 00:13:41.246 "write": true, 00:13:41.246 "unmap": false, 00:13:41.246 "flush": false, 00:13:41.246 "reset": true, 00:13:41.246 "nvme_admin": false, 00:13:41.246 "nvme_io": false, 00:13:41.246 "nvme_io_md": false, 00:13:41.246 "write_zeroes": true, 00:13:41.246 "zcopy": false, 00:13:41.246 "get_zone_info": 
false, 00:13:41.246 "zone_management": false, 00:13:41.246 "zone_append": false, 00:13:41.246 "compare": false, 00:13:41.246 "compare_and_write": false, 00:13:41.246 "abort": false, 00:13:41.246 "seek_hole": false, 00:13:41.246 "seek_data": false, 00:13:41.246 "copy": false, 00:13:41.246 "nvme_iov_md": false 00:13:41.246 }, 00:13:41.246 "memory_domains": [ 00:13:41.246 { 00:13:41.246 "dma_device_id": "system", 00:13:41.246 "dma_device_type": 1 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.246 "dma_device_type": 2 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "dma_device_id": "system", 00:13:41.246 "dma_device_type": 1 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.246 "dma_device_type": 2 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "dma_device_id": "system", 00:13:41.246 "dma_device_type": 1 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:41.246 "dma_device_type": 2 00:13:41.246 } 00:13:41.246 ], 00:13:41.246 "driver_specific": { 00:13:41.246 "raid": { 00:13:41.246 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:41.246 "strip_size_kb": 0, 00:13:41.246 "state": "online", 00:13:41.246 "raid_level": "raid1", 00:13:41.246 "superblock": true, 00:13:41.246 "num_base_bdevs": 3, 00:13:41.246 "num_base_bdevs_discovered": 3, 00:13:41.246 "num_base_bdevs_operational": 3, 00:13:41.246 "base_bdevs_list": [ 00:13:41.246 { 00:13:41.246 "name": "pt1", 00:13:41.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:41.246 "is_configured": true, 00:13:41.246 "data_offset": 2048, 00:13:41.246 "data_size": 63488 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "name": "pt2", 00:13:41.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.246 "is_configured": true, 00:13:41.246 "data_offset": 2048, 00:13:41.246 "data_size": 63488 00:13:41.246 }, 00:13:41.246 { 00:13:41.246 "name": "pt3", 00:13:41.246 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:13:41.246 "is_configured": true, 00:13:41.246 "data_offset": 2048, 00:13:41.246 "data_size": 63488 00:13:41.246 } 00:13:41.246 ] 00:13:41.246 } 00:13:41.246 } 00:13:41.246 }' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:41.246 pt2 00:13:41.246 pt3' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.246 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.505 [2024-11-20 07:10:38.590949] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b9701b4c-5d3f-44b3-b2e8-f368af83ca3e '!=' b9701b4c-5d3f-44b3-b2e8-f368af83ca3e ']' 00:13:41.505 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.506 [2024-11-20 07:10:38.634636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.506 07:10:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.506 "name": "raid_bdev1", 00:13:41.506 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:41.506 "strip_size_kb": 0, 00:13:41.506 "state": "online", 00:13:41.506 "raid_level": "raid1", 00:13:41.506 "superblock": true, 00:13:41.506 "num_base_bdevs": 3, 00:13:41.506 "num_base_bdevs_discovered": 2, 00:13:41.506 "num_base_bdevs_operational": 2, 00:13:41.506 "base_bdevs_list": [ 00:13:41.506 { 00:13:41.506 "name": null, 00:13:41.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.506 "is_configured": false, 00:13:41.506 "data_offset": 0, 00:13:41.506 "data_size": 63488 00:13:41.506 }, 00:13:41.506 { 00:13:41.506 "name": "pt2", 00:13:41.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:41.506 "is_configured": true, 00:13:41.506 "data_offset": 2048, 00:13:41.506 "data_size": 63488 00:13:41.506 }, 00:13:41.506 { 00:13:41.506 "name": "pt3", 00:13:41.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:41.506 "is_configured": true, 00:13:41.506 "data_offset": 2048, 00:13:41.506 "data_size": 63488 00:13:41.506 } 
00:13:41.506 ] 00:13:41.506 }' 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.506 07:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 [2024-11-20 07:10:39.206739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:42.073 [2024-11-20 07:10:39.206775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.073 [2024-11-20 07:10:39.206883] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.073 [2024-11-20 07:10:39.206963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.073 [2024-11-20 07:10:39.206987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.073 07:10:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 [2024-11-20 07:10:39.290703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:42.073 [2024-11-20 07:10:39.290777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.073 [2024-11-20 07:10:39.290803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:42.073 [2024-11-20 07:10:39.290820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.073 [2024-11-20 07:10:39.293684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.073 [2024-11-20 07:10:39.293736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:42.073 [2024-11-20 07:10:39.293834] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:42.073 [2024-11-20 07:10:39.294066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.073 pt2 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.073 07:10:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.073 "name": "raid_bdev1", 00:13:42.073 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:42.073 "strip_size_kb": 0, 00:13:42.073 "state": "configuring", 00:13:42.073 "raid_level": "raid1", 00:13:42.073 "superblock": true, 00:13:42.073 "num_base_bdevs": 3, 00:13:42.073 "num_base_bdevs_discovered": 1, 00:13:42.073 "num_base_bdevs_operational": 2, 00:13:42.073 "base_bdevs_list": [ 00:13:42.073 { 00:13:42.073 "name": null, 00:13:42.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.073 "is_configured": false, 00:13:42.073 "data_offset": 2048, 00:13:42.073 "data_size": 63488 00:13:42.073 }, 00:13:42.073 { 00:13:42.073 "name": "pt2", 00:13:42.073 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.073 "is_configured": true, 00:13:42.073 "data_offset": 2048, 00:13:42.073 "data_size": 63488 00:13:42.073 }, 00:13:42.073 { 00:13:42.073 "name": null, 00:13:42.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.073 "is_configured": false, 00:13:42.073 "data_offset": 2048, 00:13:42.073 "data_size": 63488 00:13:42.073 } 
00:13:42.073 ] 00:13:42.073 }' 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.073 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.640 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.641 [2024-11-20 07:10:39.814929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.641 [2024-11-20 07:10:39.815145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.641 [2024-11-20 07:10:39.815321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:42.641 [2024-11-20 07:10:39.815457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.641 [2024-11-20 07:10:39.816168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.641 [2024-11-20 07:10:39.816217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.641 [2024-11-20 07:10:39.816342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:42.641 [2024-11-20 07:10:39.816384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.641 [2024-11-20 07:10:39.816529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:13:42.641 [2024-11-20 07:10:39.816558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.641 [2024-11-20 07:10:39.816899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:42.641 [2024-11-20 07:10:39.817096] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:42.641 [2024-11-20 07:10:39.817112] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:42.641 [2024-11-20 07:10:39.817281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.641 pt3 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.641 
07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.641 "name": "raid_bdev1", 00:13:42.641 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:42.641 "strip_size_kb": 0, 00:13:42.641 "state": "online", 00:13:42.641 "raid_level": "raid1", 00:13:42.641 "superblock": true, 00:13:42.641 "num_base_bdevs": 3, 00:13:42.641 "num_base_bdevs_discovered": 2, 00:13:42.641 "num_base_bdevs_operational": 2, 00:13:42.641 "base_bdevs_list": [ 00:13:42.641 { 00:13:42.641 "name": null, 00:13:42.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.641 "is_configured": false, 00:13:42.641 "data_offset": 2048, 00:13:42.641 "data_size": 63488 00:13:42.641 }, 00:13:42.641 { 00:13:42.641 "name": "pt2", 00:13:42.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.641 "is_configured": true, 00:13:42.641 "data_offset": 2048, 00:13:42.641 "data_size": 63488 00:13:42.641 }, 00:13:42.641 { 00:13:42.641 "name": "pt3", 00:13:42.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.641 "is_configured": true, 00:13:42.641 "data_offset": 2048, 00:13:42.641 "data_size": 63488 00:13:42.641 } 00:13:42.641 ] 00:13:42.641 }' 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.641 07:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.208 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.208 07:10:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.209 [2024-11-20 07:10:40.291037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.209 [2024-11-20 07:10:40.291076] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.209 [2024-11-20 07:10:40.291170] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.209 [2024-11-20 07:10:40.291269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.209 [2024-11-20 07:10:40.291300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.209 [2024-11-20 07:10:40.363065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:43.209 [2024-11-20 07:10:40.363315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.209 [2024-11-20 07:10:40.363356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:43.209 [2024-11-20 07:10:40.363372] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.209 [2024-11-20 07:10:40.366567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.209 [2024-11-20 07:10:40.366610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:43.209 [2024-11-20 07:10:40.366723] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:43.209 [2024-11-20 07:10:40.366793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.209 [2024-11-20 07:10:40.367018] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:43.209 [2024-11-20 07:10:40.367037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.209 [2024-11-20 07:10:40.367059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:13:43.209 [2024-11-20 07:10:40.367130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.209 pt1 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.209 "name": "raid_bdev1", 00:13:43.209 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:43.209 "strip_size_kb": 0, 00:13:43.209 "state": "configuring", 00:13:43.209 "raid_level": "raid1", 00:13:43.209 "superblock": true, 00:13:43.209 "num_base_bdevs": 3, 00:13:43.209 "num_base_bdevs_discovered": 1, 00:13:43.209 "num_base_bdevs_operational": 2, 00:13:43.209 "base_bdevs_list": [ 00:13:43.209 { 00:13:43.209 "name": null, 00:13:43.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.209 "is_configured": false, 00:13:43.209 "data_offset": 2048, 00:13:43.209 "data_size": 63488 00:13:43.209 }, 00:13:43.209 { 00:13:43.209 "name": "pt2", 00:13:43.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.209 "is_configured": true, 00:13:43.209 "data_offset": 2048, 00:13:43.209 "data_size": 63488 00:13:43.209 }, 00:13:43.209 { 00:13:43.209 "name": null, 00:13:43.209 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.209 "is_configured": false, 00:13:43.209 "data_offset": 2048, 00:13:43.209 "data_size": 63488 00:13:43.209 } 00:13:43.209 ] 00:13:43.209 }' 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.209 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 [2024-11-20 07:10:40.907297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:43.779 [2024-11-20 07:10:40.907541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:43.779 [2024-11-20 07:10:40.907586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:43.779 [2024-11-20 07:10:40.907602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:43.779 [2024-11-20 07:10:40.908251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:43.779 [2024-11-20 07:10:40.908283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:43.779 [2024-11-20 07:10:40.908395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:43.779 [2024-11-20 07:10:40.908461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:43.779 [2024-11-20 07:10:40.908630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:43.779 [2024-11-20 07:10:40.908646] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:43.779 [2024-11-20 07:10:40.908984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:43.779 [2024-11-20 07:10:40.909193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:43.779 [2024-11-20 07:10:40.909215] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:43.779 [2024-11-20 07:10:40.909391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.779 pt3 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:43.779 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.779 "name": "raid_bdev1", 00:13:43.779 "uuid": "b9701b4c-5d3f-44b3-b2e8-f368af83ca3e", 00:13:43.779 "strip_size_kb": 0, 00:13:43.779 "state": "online", 00:13:43.779 "raid_level": "raid1", 00:13:43.779 "superblock": true, 00:13:43.779 "num_base_bdevs": 3, 00:13:43.779 "num_base_bdevs_discovered": 2, 00:13:43.779 "num_base_bdevs_operational": 2, 00:13:43.779 "base_bdevs_list": [ 00:13:43.779 { 00:13:43.779 "name": null, 00:13:43.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.779 "is_configured": false, 00:13:43.779 "data_offset": 2048, 00:13:43.779 "data_size": 63488 00:13:43.779 }, 00:13:43.779 { 00:13:43.779 "name": "pt2", 00:13:43.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.779 "is_configured": true, 00:13:43.779 "data_offset": 2048, 00:13:43.779 "data_size": 63488 00:13:43.780 }, 00:13:43.780 { 00:13:43.780 "name": "pt3", 00:13:43.780 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.780 "is_configured": true, 00:13:43.780 "data_offset": 2048, 00:13:43.780 "data_size": 63488 00:13:43.780 } 00:13:43.780 ] 00:13:43.780 }' 00:13:43.780 07:10:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.780 07:10:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.349 [2024-11-20 07:10:41.479812] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b9701b4c-5d3f-44b3-b2e8-f368af83ca3e '!=' b9701b4c-5d3f-44b3-b2e8-f368af83ca3e ']' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68638 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68638 ']' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68638 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68638 00:13:44.349 killing process with pid 68638 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68638' 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68638 00:13:44.349 [2024-11-20 07:10:41.561065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:44.349 07:10:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68638 00:13:44.349 [2024-11-20 07:10:41.561182] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:44.349 [2024-11-20 07:10:41.561262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:44.349 [2024-11-20 07:10:41.561281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:44.609 [2024-11-20 07:10:41.839137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:45.986 ************************************ 00:13:45.986 END TEST raid_superblock_test 00:13:45.986 ************************************ 00:13:45.986 07:10:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:45.986 00:13:45.986 real 0m8.564s 00:13:45.986 user 0m14.014s 00:13:45.986 sys 0m1.188s 00:13:45.986 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.986 07:10:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.986 07:10:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:13:45.986 07:10:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:45.986 07:10:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.986 07:10:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:45.986 ************************************ 00:13:45.986 START TEST raid_read_error_test 00:13:45.986 ************************************ 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:13:45.986 07:10:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:45.986 07:10:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.n9KLFWUVae 00:13:45.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.986 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69095 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69095 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69095 ']' 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:45.987 07:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.987 [2024-11-20 07:10:43.056505] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:13:45.987 [2024-11-20 07:10:43.056924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69095 ] 00:13:45.987 [2024-11-20 07:10:43.245641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.246 [2024-11-20 07:10:43.405679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.505 [2024-11-20 07:10:43.619127] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.505 [2024-11-20 07:10:43.619169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.763 BaseBdev1_malloc 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.763 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 true 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 [2024-11-20 07:10:44.088285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:47.023 [2024-11-20 07:10:44.088369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.023 [2024-11-20 07:10:44.088397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:47.023 [2024-11-20 07:10:44.088413] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.023 [2024-11-20 07:10:44.091195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.023 [2024-11-20 07:10:44.091261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.023 BaseBdev1 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 BaseBdev2_malloc 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 true 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 [2024-11-20 07:10:44.148301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:47.023 [2024-11-20 07:10:44.148387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.023 [2024-11-20 07:10:44.148420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:47.023 [2024-11-20 07:10:44.148436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.023 [2024-11-20 07:10:44.151389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.023 BaseBdev2 00:13:47.023 [2024-11-20 07:10:44.151613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 BaseBdev3_malloc 00:13:47.023 07:10:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 true 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 [2024-11-20 07:10:44.222792] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:47.023 [2024-11-20 07:10:44.222880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.023 [2024-11-20 07:10:44.222911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:47.023 [2024-11-20 07:10:44.222929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.023 [2024-11-20 07:10:44.225936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.023 [2024-11-20 07:10:44.226004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.023 BaseBdev3 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.023 [2024-11-20 07:10:44.230934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.023 [2024-11-20 07:10:44.233627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.023 [2024-11-20 07:10:44.233858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.023 [2024-11-20 07:10:44.234195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:47.023 [2024-11-20 07:10:44.234215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.023 [2024-11-20 07:10:44.234549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:47.023 [2024-11-20 07:10:44.234797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:47.023 [2024-11-20 07:10:44.234819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:47.023 [2024-11-20 07:10:44.235081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.023 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.024 07:10:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.024 "name": "raid_bdev1", 00:13:47.024 "uuid": "29cb6b66-37a1-47d7-afcb-454ab10f4e48", 00:13:47.024 "strip_size_kb": 0, 00:13:47.024 "state": "online", 00:13:47.024 "raid_level": "raid1", 00:13:47.024 "superblock": true, 00:13:47.024 "num_base_bdevs": 3, 00:13:47.024 "num_base_bdevs_discovered": 3, 00:13:47.024 "num_base_bdevs_operational": 3, 00:13:47.024 "base_bdevs_list": [ 00:13:47.024 { 00:13:47.024 "name": "BaseBdev1", 00:13:47.024 "uuid": "07289503-f3a8-5401-be60-f073ddb2a9cc", 00:13:47.024 "is_configured": true, 00:13:47.024 "data_offset": 2048, 00:13:47.024 "data_size": 63488 00:13:47.024 }, 00:13:47.024 { 00:13:47.024 "name": "BaseBdev2", 00:13:47.024 "uuid": "af49409b-9813-5525-878d-48cf06911a4f", 00:13:47.024 "is_configured": true, 00:13:47.024 "data_offset": 2048, 00:13:47.024 "data_size": 63488 
00:13:47.024 }, 00:13:47.024 { 00:13:47.024 "name": "BaseBdev3", 00:13:47.024 "uuid": "252a0fdd-e8de-5903-981b-568f75dfc37f", 00:13:47.024 "is_configured": true, 00:13:47.024 "data_offset": 2048, 00:13:47.024 "data_size": 63488 00:13:47.024 } 00:13:47.024 ] 00:13:47.024 }' 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.024 07:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.606 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:47.606 07:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:47.606 [2024-11-20 07:10:44.840717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.542 
07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.542 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.542 "name": "raid_bdev1", 00:13:48.542 "uuid": "29cb6b66-37a1-47d7-afcb-454ab10f4e48", 00:13:48.542 "strip_size_kb": 0, 00:13:48.542 "state": "online", 00:13:48.542 "raid_level": "raid1", 00:13:48.542 "superblock": true, 00:13:48.542 "num_base_bdevs": 3, 00:13:48.542 "num_base_bdevs_discovered": 3, 00:13:48.542 "num_base_bdevs_operational": 3, 00:13:48.542 "base_bdevs_list": [ 00:13:48.542 { 00:13:48.542 "name": "BaseBdev1", 00:13:48.542 "uuid": "07289503-f3a8-5401-be60-f073ddb2a9cc", 
00:13:48.542 "is_configured": true, 00:13:48.542 "data_offset": 2048, 00:13:48.542 "data_size": 63488 00:13:48.542 }, 00:13:48.542 { 00:13:48.542 "name": "BaseBdev2", 00:13:48.542 "uuid": "af49409b-9813-5525-878d-48cf06911a4f", 00:13:48.542 "is_configured": true, 00:13:48.543 "data_offset": 2048, 00:13:48.543 "data_size": 63488 00:13:48.543 }, 00:13:48.543 { 00:13:48.543 "name": "BaseBdev3", 00:13:48.543 "uuid": "252a0fdd-e8de-5903-981b-568f75dfc37f", 00:13:48.543 "is_configured": true, 00:13:48.543 "data_offset": 2048, 00:13:48.543 "data_size": 63488 00:13:48.543 } 00:13:48.543 ] 00:13:48.543 }' 00:13:48.543 07:10:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.543 07:10:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.110 [2024-11-20 07:10:46.290514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.110 [2024-11-20 07:10:46.290547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.110 [2024-11-20 07:10:46.294142] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:49.110 [2024-11-20 07:10:46.294372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.110 [2024-11-20 07:10:46.294706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.110 [2024-11-20 07:10:46.294936] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:49.110 { 00:13:49.110 "results": [ 00:13:49.110 { 00:13:49.110 "job": "raid_bdev1", 
00:13:49.110 "core_mask": "0x1", 00:13:49.110 "workload": "randrw", 00:13:49.110 "percentage": 50, 00:13:49.110 "status": "finished", 00:13:49.110 "queue_depth": 1, 00:13:49.110 "io_size": 131072, 00:13:49.110 "runtime": 1.447051, 00:13:49.110 "iops": 9380.45721954513, 00:13:49.110 "mibps": 1172.5571524431411, 00:13:49.110 "io_failed": 0, 00:13:49.110 "io_timeout": 0, 00:13:49.110 "avg_latency_us": 102.43094686365644, 00:13:49.110 "min_latency_us": 37.93454545454546, 00:13:49.110 "max_latency_us": 1995.8690909090908 00:13:49.110 } 00:13:49.110 ], 00:13:49.110 "core_count": 1 00:13:49.110 } 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69095 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69095 ']' 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69095 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69095 00:13:49.110 killing process with pid 69095 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69095' 00:13:49.110 07:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69095 00:13:49.110 [2024-11-20 07:10:46.336292] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:49.110 07:10:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69095 00:13:49.369 [2024-11-20 07:10:46.541660] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.n9KLFWUVae 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:50.746 00:13:50.746 real 0m4.699s 00:13:50.746 user 0m5.803s 00:13:50.746 sys 0m0.584s 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.746 07:10:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.746 ************************************ 00:13:50.746 END TEST raid_read_error_test 00:13:50.746 ************************************ 00:13:50.746 07:10:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:13:50.746 07:10:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:50.746 07:10:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.746 07:10:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.746 ************************************ 00:13:50.747 START TEST raid_write_error_test 00:13:50.747 ************************************ 00:13:50.747 07:10:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m4QDNSJ0tn 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69241 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69241 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69241 ']' 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:50.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:50.747 07:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.747 [2024-11-20 07:10:47.798109] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:13:50.747 [2024-11-20 07:10:47.799200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69241 ] 00:13:50.747 [2024-11-20 07:10:47.991926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.005 [2024-11-20 07:10:48.124224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.263 [2024-11-20 07:10:48.326645] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.263 [2024-11-20 07:10:48.326999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 BaseBdev1_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 true 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 [2024-11-20 07:10:48.911876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:51.832 [2024-11-20 07:10:48.911962] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.832 [2024-11-20 07:10:48.911992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:51.832 [2024-11-20 07:10:48.912011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.832 [2024-11-20 07:10:48.914909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.832 [2024-11-20 07:10:48.914981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:51.832 BaseBdev1 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:51.832 BaseBdev2_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 true 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 [2024-11-20 07:10:48.971367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:51.832 [2024-11-20 07:10:48.971452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.832 [2024-11-20 07:10:48.971478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:51.832 [2024-11-20 07:10:48.971493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.832 [2024-11-20 07:10:48.974335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.832 [2024-11-20 07:10:48.974396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:51.832 BaseBdev2 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:51.832 07:10:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 BaseBdev3_malloc 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 true 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 [2024-11-20 07:10:49.047530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:51.832 [2024-11-20 07:10:49.047785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:51.832 [2024-11-20 07:10:49.047823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:51.832 [2024-11-20 07:10:49.047842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.832 [2024-11-20 07:10:49.050768] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.832 [2024-11-20 07:10:49.050834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:13:51.832 BaseBdev3 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 [2024-11-20 07:10:49.055652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:51.832 [2024-11-20 07:10:49.058201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:51.832 [2024-11-20 07:10:49.058314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:51.832 [2024-11-20 07:10:49.058629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:51.832 [2024-11-20 07:10:49.058649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:51.832 [2024-11-20 07:10:49.058973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:51.832 [2024-11-20 07:10:49.059219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:51.832 [2024-11-20 07:10:49.059239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:51.832 [2024-11-20 07:10:49.059469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.832 "name": "raid_bdev1", 00:13:51.832 "uuid": "7face1fe-81f2-4a9c-b621-f558b7ca72f2", 00:13:51.832 "strip_size_kb": 0, 00:13:51.832 "state": "online", 00:13:51.832 "raid_level": "raid1", 00:13:51.832 "superblock": true, 00:13:51.832 "num_base_bdevs": 3, 00:13:51.832 "num_base_bdevs_discovered": 3, 00:13:51.832 "num_base_bdevs_operational": 3, 00:13:51.832 "base_bdevs_list": [ 00:13:51.832 { 00:13:51.832 "name": "BaseBdev1", 00:13:51.832 
"uuid": "d49d63c9-8bd2-56d7-8dec-fff922b4acc2", 00:13:51.832 "is_configured": true, 00:13:51.832 "data_offset": 2048, 00:13:51.832 "data_size": 63488 00:13:51.832 }, 00:13:51.832 { 00:13:51.832 "name": "BaseBdev2", 00:13:51.832 "uuid": "7ad09108-0275-540a-b007-98d1ff83ff11", 00:13:51.832 "is_configured": true, 00:13:51.832 "data_offset": 2048, 00:13:51.832 "data_size": 63488 00:13:51.832 }, 00:13:51.832 { 00:13:51.832 "name": "BaseBdev3", 00:13:51.832 "uuid": "4a4531f7-d28d-5bff-95a3-634c75eefcac", 00:13:51.832 "is_configured": true, 00:13:51.832 "data_offset": 2048, 00:13:51.832 "data_size": 63488 00:13:51.832 } 00:13:51.832 ] 00:13:51.832 }' 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.832 07:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.400 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:52.400 07:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:52.400 [2024-11-20 07:10:49.697284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.336 [2024-11-20 07:10:50.578217] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:53.336 [2024-11-20 07:10:50.578316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:53.336 [2024-11-20 07:10:50.578587] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.336 "name": "raid_bdev1", 00:13:53.336 "uuid": "7face1fe-81f2-4a9c-b621-f558b7ca72f2", 00:13:53.336 "strip_size_kb": 0, 00:13:53.336 "state": "online", 00:13:53.336 "raid_level": "raid1", 00:13:53.336 "superblock": true, 00:13:53.336 "num_base_bdevs": 3, 00:13:53.336 "num_base_bdevs_discovered": 2, 00:13:53.336 "num_base_bdevs_operational": 2, 00:13:53.336 "base_bdevs_list": [ 00:13:53.336 { 00:13:53.336 "name": null, 00:13:53.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.336 "is_configured": false, 00:13:53.336 "data_offset": 0, 00:13:53.336 "data_size": 63488 00:13:53.336 }, 00:13:53.336 { 00:13:53.336 "name": "BaseBdev2", 00:13:53.336 "uuid": "7ad09108-0275-540a-b007-98d1ff83ff11", 00:13:53.336 "is_configured": true, 00:13:53.336 "data_offset": 2048, 00:13:53.336 "data_size": 63488 00:13:53.336 }, 00:13:53.336 { 00:13:53.336 "name": "BaseBdev3", 00:13:53.336 "uuid": "4a4531f7-d28d-5bff-95a3-634c75eefcac", 00:13:53.336 "is_configured": true, 00:13:53.336 "data_offset": 2048, 00:13:53.336 "data_size": 63488 00:13:53.336 } 00:13:53.336 ] 00:13:53.336 }' 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.336 07:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.903 [2024-11-20 07:10:51.144330] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.903 [2024-11-20 07:10:51.144513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.903 [2024-11-20 07:10:51.148057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.903 [2024-11-20 07:10:51.148331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.903 [2024-11-20 07:10:51.148626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.903 [2024-11-20 07:10:51.148859] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:53.903 { 00:13:53.903 "results": [ 00:13:53.903 { 00:13:53.903 "job": "raid_bdev1", 00:13:53.903 "core_mask": "0x1", 00:13:53.903 "workload": "randrw", 00:13:53.903 "percentage": 50, 00:13:53.903 "status": "finished", 00:13:53.903 "queue_depth": 1, 00:13:53.903 "io_size": 131072, 00:13:53.903 "runtime": 1.444468, 00:13:53.903 "iops": 10106.835180841666, 00:13:53.903 "mibps": 1263.3543976052083, 00:13:53.903 "io_failed": 0, 00:13:53.903 "io_timeout": 0, 00:13:53.903 "avg_latency_us": 94.62539626001781, 00:13:53.903 "min_latency_us": 39.56363636363636, 00:13:53.903 "max_latency_us": 1817.1345454545456 00:13:53.903 } 00:13:53.903 ], 00:13:53.903 "core_count": 1 00:13:53.903 } 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69241 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69241 ']' 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69241 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:13:53.903 07:10:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69241 00:13:53.903 killing process with pid 69241 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69241' 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69241 00:13:53.903 [2024-11-20 07:10:51.191279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.903 07:10:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69241 00:13:54.163 [2024-11-20 07:10:51.398901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m4QDNSJ0tn 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:55.587 00:13:55.587 real 0m4.822s 00:13:55.587 user 0m6.016s 00:13:55.587 sys 0m0.608s 00:13:55.587 
************************************ 00:13:55.587 END TEST raid_write_error_test 00:13:55.587 ************************************ 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:55.587 07:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.587 07:10:52 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:13:55.587 07:10:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:55.587 07:10:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:13:55.587 07:10:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:55.587 07:10:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:55.587 07:10:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:55.587 ************************************ 00:13:55.587 START TEST raid_state_function_test 00:13:55.587 ************************************ 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:55.587 Process raid pid: 69385 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69385 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69385' 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69385 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69385 ']' 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.587 07:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.587 [2024-11-20 07:10:52.674353] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:13:55.587 [2024-11-20 07:10:52.674514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.587 [2024-11-20 07:10:52.856030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.845 [2024-11-20 07:10:53.012619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.104 [2024-11-20 07:10:53.222802] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.104 [2024-11-20 07:10:53.223089] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.671 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:56.671 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.672 [2024-11-20 07:10:53.689413] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.672 [2024-11-20 07:10:53.689622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.672 [2024-11-20 07:10:53.689790] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.672 [2024-11-20 07:10:53.689856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.672 [2024-11-20 07:10:53.690049] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:56.672 [2024-11-20 07:10:53.690113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.672 [2024-11-20 07:10:53.690327] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:56.672 [2024-11-20 07:10:53.690389] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.672 "name": "Existed_Raid", 00:13:56.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.672 "strip_size_kb": 64, 00:13:56.672 "state": "configuring", 00:13:56.672 "raid_level": "raid0", 00:13:56.672 "superblock": false, 00:13:56.672 "num_base_bdevs": 4, 00:13:56.672 "num_base_bdevs_discovered": 0, 00:13:56.672 "num_base_bdevs_operational": 4, 00:13:56.672 "base_bdevs_list": [ 00:13:56.672 { 00:13:56.672 "name": "BaseBdev1", 00:13:56.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.672 "is_configured": false, 00:13:56.672 "data_offset": 0, 00:13:56.672 "data_size": 0 00:13:56.672 }, 00:13:56.672 { 00:13:56.672 "name": "BaseBdev2", 00:13:56.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.672 "is_configured": false, 00:13:56.672 "data_offset": 0, 00:13:56.672 "data_size": 0 00:13:56.672 }, 00:13:56.672 { 00:13:56.672 "name": "BaseBdev3", 00:13:56.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.672 "is_configured": false, 00:13:56.672 "data_offset": 0, 00:13:56.672 "data_size": 0 00:13:56.672 }, 00:13:56.672 { 00:13:56.672 "name": "BaseBdev4", 00:13:56.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.672 "is_configured": false, 00:13:56.672 "data_offset": 0, 00:13:56.672 "data_size": 0 00:13:56.672 } 00:13:56.672 ] 00:13:56.672 }' 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.672 07:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.931 [2024-11-20 07:10:54.213534] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.931 [2024-11-20 07:10:54.213592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.931 [2024-11-20 07:10:54.221526] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.931 [2024-11-20 07:10:54.221606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.931 [2024-11-20 07:10:54.221622] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.931 [2024-11-20 07:10:54.221638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.931 [2024-11-20 07:10:54.221648] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:56.931 [2024-11-20 07:10:54.221662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:56.931 [2024-11-20 07:10:54.221672] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:56.931 [2024-11-20 07:10:54.221686] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.931 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.191 [2024-11-20 07:10:54.266709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.191 BaseBdev1 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.191 [ 00:13:57.191 { 00:13:57.191 "name": "BaseBdev1", 00:13:57.191 "aliases": [ 00:13:57.191 "d2a54d99-0d89-484e-a288-315ba53639e2" 00:13:57.191 ], 00:13:57.191 "product_name": "Malloc disk", 00:13:57.191 "block_size": 512, 00:13:57.191 "num_blocks": 65536, 00:13:57.191 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:13:57.191 "assigned_rate_limits": { 00:13:57.191 "rw_ios_per_sec": 0, 00:13:57.191 "rw_mbytes_per_sec": 0, 00:13:57.191 "r_mbytes_per_sec": 0, 00:13:57.191 "w_mbytes_per_sec": 0 00:13:57.191 }, 00:13:57.191 "claimed": true, 00:13:57.191 "claim_type": "exclusive_write", 00:13:57.191 "zoned": false, 00:13:57.191 "supported_io_types": { 00:13:57.191 "read": true, 00:13:57.191 "write": true, 00:13:57.191 "unmap": true, 00:13:57.191 "flush": true, 00:13:57.191 "reset": true, 00:13:57.191 "nvme_admin": false, 00:13:57.191 "nvme_io": false, 00:13:57.191 "nvme_io_md": false, 00:13:57.191 "write_zeroes": true, 00:13:57.191 "zcopy": true, 00:13:57.191 "get_zone_info": false, 00:13:57.191 "zone_management": false, 00:13:57.191 "zone_append": false, 00:13:57.191 "compare": false, 00:13:57.191 "compare_and_write": false, 00:13:57.191 "abort": true, 00:13:57.191 "seek_hole": false, 00:13:57.191 "seek_data": false, 00:13:57.191 "copy": true, 00:13:57.191 "nvme_iov_md": false 00:13:57.191 }, 00:13:57.191 "memory_domains": [ 00:13:57.191 { 00:13:57.191 "dma_device_id": "system", 00:13:57.191 "dma_device_type": 1 00:13:57.191 }, 00:13:57.191 { 00:13:57.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:57.191 "dma_device_type": 2 00:13:57.191 } 00:13:57.191 ], 00:13:57.191 "driver_specific": {} 00:13:57.191 } 00:13:57.191 ] 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:57.191 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.192 "name": "Existed_Raid", 
00:13:57.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.192 "strip_size_kb": 64, 00:13:57.192 "state": "configuring", 00:13:57.192 "raid_level": "raid0", 00:13:57.192 "superblock": false, 00:13:57.192 "num_base_bdevs": 4, 00:13:57.192 "num_base_bdevs_discovered": 1, 00:13:57.192 "num_base_bdevs_operational": 4, 00:13:57.192 "base_bdevs_list": [ 00:13:57.192 { 00:13:57.192 "name": "BaseBdev1", 00:13:57.192 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:13:57.192 "is_configured": true, 00:13:57.192 "data_offset": 0, 00:13:57.192 "data_size": 65536 00:13:57.192 }, 00:13:57.192 { 00:13:57.192 "name": "BaseBdev2", 00:13:57.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.192 "is_configured": false, 00:13:57.192 "data_offset": 0, 00:13:57.192 "data_size": 0 00:13:57.192 }, 00:13:57.192 { 00:13:57.192 "name": "BaseBdev3", 00:13:57.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.192 "is_configured": false, 00:13:57.192 "data_offset": 0, 00:13:57.192 "data_size": 0 00:13:57.192 }, 00:13:57.192 { 00:13:57.192 "name": "BaseBdev4", 00:13:57.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.192 "is_configured": false, 00:13:57.192 "data_offset": 0, 00:13:57.192 "data_size": 0 00:13:57.192 } 00:13:57.192 ] 00:13:57.192 }' 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.192 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.760 [2024-11-20 07:10:54.822966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:57.760 [2024-11-20 07:10:54.823028] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.760 [2024-11-20 07:10:54.834993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.760 [2024-11-20 07:10:54.837668] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:57.760 [2024-11-20 07:10:54.837859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:57.760 [2024-11-20 07:10:54.837989] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:57.760 [2024-11-20 07:10:54.838122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:57.760 [2024-11-20 07:10:54.838227] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:57.760 [2024-11-20 07:10:54.838364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.760 "name": "Existed_Raid", 00:13:57.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.760 "strip_size_kb": 64, 00:13:57.760 "state": "configuring", 00:13:57.760 "raid_level": "raid0", 00:13:57.760 "superblock": false, 00:13:57.760 "num_base_bdevs": 4, 00:13:57.760 
"num_base_bdevs_discovered": 1, 00:13:57.760 "num_base_bdevs_operational": 4, 00:13:57.760 "base_bdevs_list": [ 00:13:57.760 { 00:13:57.760 "name": "BaseBdev1", 00:13:57.760 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:13:57.760 "is_configured": true, 00:13:57.760 "data_offset": 0, 00:13:57.760 "data_size": 65536 00:13:57.760 }, 00:13:57.760 { 00:13:57.760 "name": "BaseBdev2", 00:13:57.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.760 "is_configured": false, 00:13:57.760 "data_offset": 0, 00:13:57.760 "data_size": 0 00:13:57.760 }, 00:13:57.760 { 00:13:57.760 "name": "BaseBdev3", 00:13:57.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.760 "is_configured": false, 00:13:57.760 "data_offset": 0, 00:13:57.760 "data_size": 0 00:13:57.760 }, 00:13:57.760 { 00:13:57.760 "name": "BaseBdev4", 00:13:57.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.760 "is_configured": false, 00:13:57.760 "data_offset": 0, 00:13:57.760 "data_size": 0 00:13:57.760 } 00:13:57.760 ] 00:13:57.760 }' 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.760 07:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 [2024-11-20 07:10:55.409844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.327 BaseBdev2 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:58.327 07:10:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.327 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 [ 00:13:58.327 { 00:13:58.327 "name": "BaseBdev2", 00:13:58.327 "aliases": [ 00:13:58.327 "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877" 00:13:58.327 ], 00:13:58.327 "product_name": "Malloc disk", 00:13:58.327 "block_size": 512, 00:13:58.327 "num_blocks": 65536, 00:13:58.327 "uuid": "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877", 00:13:58.327 "assigned_rate_limits": { 00:13:58.327 "rw_ios_per_sec": 0, 00:13:58.327 "rw_mbytes_per_sec": 0, 00:13:58.328 "r_mbytes_per_sec": 0, 00:13:58.328 "w_mbytes_per_sec": 0 00:13:58.328 }, 00:13:58.328 "claimed": true, 00:13:58.328 "claim_type": "exclusive_write", 00:13:58.328 "zoned": false, 00:13:58.328 "supported_io_types": { 
00:13:58.328 "read": true, 00:13:58.328 "write": true, 00:13:58.328 "unmap": true, 00:13:58.328 "flush": true, 00:13:58.328 "reset": true, 00:13:58.328 "nvme_admin": false, 00:13:58.328 "nvme_io": false, 00:13:58.328 "nvme_io_md": false, 00:13:58.328 "write_zeroes": true, 00:13:58.328 "zcopy": true, 00:13:58.328 "get_zone_info": false, 00:13:58.328 "zone_management": false, 00:13:58.328 "zone_append": false, 00:13:58.328 "compare": false, 00:13:58.328 "compare_and_write": false, 00:13:58.328 "abort": true, 00:13:58.328 "seek_hole": false, 00:13:58.328 "seek_data": false, 00:13:58.328 "copy": true, 00:13:58.328 "nvme_iov_md": false 00:13:58.328 }, 00:13:58.328 "memory_domains": [ 00:13:58.328 { 00:13:58.328 "dma_device_id": "system", 00:13:58.328 "dma_device_type": 1 00:13:58.328 }, 00:13:58.328 { 00:13:58.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.328 "dma_device_type": 2 00:13:58.328 } 00:13:58.328 ], 00:13:58.328 "driver_specific": {} 00:13:58.328 } 00:13:58.328 ] 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.328 "name": "Existed_Raid", 00:13:58.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.328 "strip_size_kb": 64, 00:13:58.328 "state": "configuring", 00:13:58.328 "raid_level": "raid0", 00:13:58.328 "superblock": false, 00:13:58.328 "num_base_bdevs": 4, 00:13:58.328 "num_base_bdevs_discovered": 2, 00:13:58.328 "num_base_bdevs_operational": 4, 00:13:58.328 "base_bdevs_list": [ 00:13:58.328 { 00:13:58.328 "name": "BaseBdev1", 00:13:58.328 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:13:58.328 "is_configured": true, 00:13:58.328 "data_offset": 0, 00:13:58.328 "data_size": 65536 00:13:58.328 }, 00:13:58.328 { 00:13:58.328 "name": "BaseBdev2", 00:13:58.328 "uuid": "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877", 00:13:58.328 
"is_configured": true, 00:13:58.328 "data_offset": 0, 00:13:58.328 "data_size": 65536 00:13:58.328 }, 00:13:58.328 { 00:13:58.328 "name": "BaseBdev3", 00:13:58.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.328 "is_configured": false, 00:13:58.328 "data_offset": 0, 00:13:58.328 "data_size": 0 00:13:58.328 }, 00:13:58.328 { 00:13:58.328 "name": "BaseBdev4", 00:13:58.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.328 "is_configured": false, 00:13:58.328 "data_offset": 0, 00:13:58.328 "data_size": 0 00:13:58.328 } 00:13:58.328 ] 00:13:58.328 }' 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.328 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.894 07:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:58.894 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.894 07:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.894 [2024-11-20 07:10:56.014190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.894 BaseBdev3 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.894 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.895 [ 00:13:58.895 { 00:13:58.895 "name": "BaseBdev3", 00:13:58.895 "aliases": [ 00:13:58.895 "c97eb0dd-5a80-456b-8d5f-191f9d63a761" 00:13:58.895 ], 00:13:58.895 "product_name": "Malloc disk", 00:13:58.895 "block_size": 512, 00:13:58.895 "num_blocks": 65536, 00:13:58.895 "uuid": "c97eb0dd-5a80-456b-8d5f-191f9d63a761", 00:13:58.895 "assigned_rate_limits": { 00:13:58.895 "rw_ios_per_sec": 0, 00:13:58.895 "rw_mbytes_per_sec": 0, 00:13:58.895 "r_mbytes_per_sec": 0, 00:13:58.895 "w_mbytes_per_sec": 0 00:13:58.895 }, 00:13:58.895 "claimed": true, 00:13:58.895 "claim_type": "exclusive_write", 00:13:58.895 "zoned": false, 00:13:58.895 "supported_io_types": { 00:13:58.895 "read": true, 00:13:58.895 "write": true, 00:13:58.895 "unmap": true, 00:13:58.895 "flush": true, 00:13:58.895 "reset": true, 00:13:58.895 "nvme_admin": false, 00:13:58.895 "nvme_io": false, 00:13:58.895 "nvme_io_md": false, 00:13:58.895 "write_zeroes": true, 00:13:58.895 "zcopy": true, 00:13:58.895 "get_zone_info": false, 00:13:58.895 "zone_management": false, 00:13:58.895 "zone_append": false, 00:13:58.895 "compare": false, 00:13:58.895 "compare_and_write": false, 
00:13:58.895 "abort": true, 00:13:58.895 "seek_hole": false, 00:13:58.895 "seek_data": false, 00:13:58.895 "copy": true, 00:13:58.895 "nvme_iov_md": false 00:13:58.895 }, 00:13:58.895 "memory_domains": [ 00:13:58.895 { 00:13:58.895 "dma_device_id": "system", 00:13:58.895 "dma_device_type": 1 00:13:58.895 }, 00:13:58.895 { 00:13:58.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:58.895 "dma_device_type": 2 00:13:58.895 } 00:13:58.895 ], 00:13:58.895 "driver_specific": {} 00:13:58.895 } 00:13:58.895 ] 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.895 "name": "Existed_Raid", 00:13:58.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.895 "strip_size_kb": 64, 00:13:58.895 "state": "configuring", 00:13:58.895 "raid_level": "raid0", 00:13:58.895 "superblock": false, 00:13:58.895 "num_base_bdevs": 4, 00:13:58.895 "num_base_bdevs_discovered": 3, 00:13:58.895 "num_base_bdevs_operational": 4, 00:13:58.895 "base_bdevs_list": [ 00:13:58.895 { 00:13:58.895 "name": "BaseBdev1", 00:13:58.895 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:13:58.895 "is_configured": true, 00:13:58.895 "data_offset": 0, 00:13:58.895 "data_size": 65536 00:13:58.895 }, 00:13:58.895 { 00:13:58.895 "name": "BaseBdev2", 00:13:58.895 "uuid": "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877", 00:13:58.895 "is_configured": true, 00:13:58.895 "data_offset": 0, 00:13:58.895 "data_size": 65536 00:13:58.895 }, 00:13:58.895 { 00:13:58.895 "name": "BaseBdev3", 00:13:58.895 "uuid": "c97eb0dd-5a80-456b-8d5f-191f9d63a761", 00:13:58.895 "is_configured": true, 00:13:58.895 "data_offset": 0, 00:13:58.895 "data_size": 65536 00:13:58.895 }, 00:13:58.895 { 00:13:58.895 "name": "BaseBdev4", 00:13:58.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.895 "is_configured": false, 
00:13:58.895 "data_offset": 0, 00:13:58.895 "data_size": 0 00:13:58.895 } 00:13:58.895 ] 00:13:58.895 }' 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.895 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 [2024-11-20 07:10:56.610082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:59.462 [2024-11-20 07:10:56.610335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:59.462 [2024-11-20 07:10:56.610361] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:59.462 [2024-11-20 07:10:56.610739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:59.462 [2024-11-20 07:10:56.611000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:59.462 [2024-11-20 07:10:56.611026] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:59.462 BaseBdev4 00:13:59.462 [2024-11-20 07:10:56.611336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.462 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.462 [ 00:13:59.462 { 00:13:59.462 "name": "BaseBdev4", 00:13:59.462 "aliases": [ 00:13:59.462 "88acc29f-45ca-4862-acf9-b8eab54fe88d" 00:13:59.462 ], 00:13:59.462 "product_name": "Malloc disk", 00:13:59.462 "block_size": 512, 00:13:59.463 "num_blocks": 65536, 00:13:59.463 "uuid": "88acc29f-45ca-4862-acf9-b8eab54fe88d", 00:13:59.463 "assigned_rate_limits": { 00:13:59.463 "rw_ios_per_sec": 0, 00:13:59.463 "rw_mbytes_per_sec": 0, 00:13:59.463 "r_mbytes_per_sec": 0, 00:13:59.463 "w_mbytes_per_sec": 0 00:13:59.463 }, 00:13:59.463 "claimed": true, 00:13:59.463 "claim_type": "exclusive_write", 00:13:59.463 "zoned": false, 00:13:59.463 "supported_io_types": { 00:13:59.463 "read": true, 00:13:59.463 "write": true, 00:13:59.463 "unmap": true, 00:13:59.463 "flush": true, 00:13:59.463 "reset": true, 00:13:59.463 
"nvme_admin": false, 00:13:59.463 "nvme_io": false, 00:13:59.463 "nvme_io_md": false, 00:13:59.463 "write_zeroes": true, 00:13:59.463 "zcopy": true, 00:13:59.463 "get_zone_info": false, 00:13:59.463 "zone_management": false, 00:13:59.463 "zone_append": false, 00:13:59.463 "compare": false, 00:13:59.463 "compare_and_write": false, 00:13:59.463 "abort": true, 00:13:59.463 "seek_hole": false, 00:13:59.463 "seek_data": false, 00:13:59.463 "copy": true, 00:13:59.463 "nvme_iov_md": false 00:13:59.463 }, 00:13:59.463 "memory_domains": [ 00:13:59.463 { 00:13:59.463 "dma_device_id": "system", 00:13:59.463 "dma_device_type": 1 00:13:59.463 }, 00:13:59.463 { 00:13:59.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:59.463 "dma_device_type": 2 00:13:59.463 } 00:13:59.463 ], 00:13:59.463 "driver_specific": {} 00:13:59.463 } 00:13:59.463 ] 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.463 07:10:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.463 "name": "Existed_Raid", 00:13:59.463 "uuid": "5c14bbda-c732-463d-8b64-6c1a27b4a140", 00:13:59.463 "strip_size_kb": 64, 00:13:59.463 "state": "online", 00:13:59.463 "raid_level": "raid0", 00:13:59.463 "superblock": false, 00:13:59.463 "num_base_bdevs": 4, 00:13:59.463 "num_base_bdevs_discovered": 4, 00:13:59.463 "num_base_bdevs_operational": 4, 00:13:59.463 "base_bdevs_list": [ 00:13:59.463 { 00:13:59.463 "name": "BaseBdev1", 00:13:59.463 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:13:59.463 "is_configured": true, 00:13:59.463 "data_offset": 0, 00:13:59.463 "data_size": 65536 00:13:59.463 }, 00:13:59.463 { 00:13:59.463 "name": "BaseBdev2", 00:13:59.463 "uuid": "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877", 00:13:59.463 "is_configured": true, 00:13:59.463 "data_offset": 0, 00:13:59.463 "data_size": 65536 00:13:59.463 }, 00:13:59.463 { 00:13:59.463 "name": "BaseBdev3", 00:13:59.463 "uuid": 
"c97eb0dd-5a80-456b-8d5f-191f9d63a761", 00:13:59.463 "is_configured": true, 00:13:59.463 "data_offset": 0, 00:13:59.463 "data_size": 65536 00:13:59.463 }, 00:13:59.463 { 00:13:59.463 "name": "BaseBdev4", 00:13:59.463 "uuid": "88acc29f-45ca-4862-acf9-b8eab54fe88d", 00:13:59.463 "is_configured": true, 00:13:59.463 "data_offset": 0, 00:13:59.463 "data_size": 65536 00:13:59.463 } 00:13:59.463 ] 00:13:59.463 }' 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.463 07:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.030 [2024-11-20 07:10:57.226754] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.030 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.030 07:10:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.030 "name": "Existed_Raid", 00:14:00.030 "aliases": [ 00:14:00.031 "5c14bbda-c732-463d-8b64-6c1a27b4a140" 00:14:00.031 ], 00:14:00.031 "product_name": "Raid Volume", 00:14:00.031 "block_size": 512, 00:14:00.031 "num_blocks": 262144, 00:14:00.031 "uuid": "5c14bbda-c732-463d-8b64-6c1a27b4a140", 00:14:00.031 "assigned_rate_limits": { 00:14:00.031 "rw_ios_per_sec": 0, 00:14:00.031 "rw_mbytes_per_sec": 0, 00:14:00.031 "r_mbytes_per_sec": 0, 00:14:00.031 "w_mbytes_per_sec": 0 00:14:00.031 }, 00:14:00.031 "claimed": false, 00:14:00.031 "zoned": false, 00:14:00.031 "supported_io_types": { 00:14:00.031 "read": true, 00:14:00.031 "write": true, 00:14:00.031 "unmap": true, 00:14:00.031 "flush": true, 00:14:00.031 "reset": true, 00:14:00.031 "nvme_admin": false, 00:14:00.031 "nvme_io": false, 00:14:00.031 "nvme_io_md": false, 00:14:00.031 "write_zeroes": true, 00:14:00.031 "zcopy": false, 00:14:00.031 "get_zone_info": false, 00:14:00.031 "zone_management": false, 00:14:00.031 "zone_append": false, 00:14:00.031 "compare": false, 00:14:00.031 "compare_and_write": false, 00:14:00.031 "abort": false, 00:14:00.031 "seek_hole": false, 00:14:00.031 "seek_data": false, 00:14:00.031 "copy": false, 00:14:00.031 "nvme_iov_md": false 00:14:00.031 }, 00:14:00.031 "memory_domains": [ 00:14:00.031 { 00:14:00.031 "dma_device_id": "system", 00:14:00.031 "dma_device_type": 1 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.031 "dma_device_type": 2 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "system", 00:14:00.031 "dma_device_type": 1 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.031 "dma_device_type": 2 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "system", 00:14:00.031 "dma_device_type": 1 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:00.031 "dma_device_type": 2 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "system", 00:14:00.031 "dma_device_type": 1 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.031 "dma_device_type": 2 00:14:00.031 } 00:14:00.031 ], 00:14:00.031 "driver_specific": { 00:14:00.031 "raid": { 00:14:00.031 "uuid": "5c14bbda-c732-463d-8b64-6c1a27b4a140", 00:14:00.031 "strip_size_kb": 64, 00:14:00.031 "state": "online", 00:14:00.031 "raid_level": "raid0", 00:14:00.031 "superblock": false, 00:14:00.031 "num_base_bdevs": 4, 00:14:00.031 "num_base_bdevs_discovered": 4, 00:14:00.031 "num_base_bdevs_operational": 4, 00:14:00.031 "base_bdevs_list": [ 00:14:00.031 { 00:14:00.031 "name": "BaseBdev1", 00:14:00.031 "uuid": "d2a54d99-0d89-484e-a288-315ba53639e2", 00:14:00.031 "is_configured": true, 00:14:00.031 "data_offset": 0, 00:14:00.031 "data_size": 65536 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "name": "BaseBdev2", 00:14:00.031 "uuid": "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877", 00:14:00.031 "is_configured": true, 00:14:00.031 "data_offset": 0, 00:14:00.031 "data_size": 65536 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "name": "BaseBdev3", 00:14:00.031 "uuid": "c97eb0dd-5a80-456b-8d5f-191f9d63a761", 00:14:00.031 "is_configured": true, 00:14:00.031 "data_offset": 0, 00:14:00.031 "data_size": 65536 00:14:00.031 }, 00:14:00.031 { 00:14:00.031 "name": "BaseBdev4", 00:14:00.031 "uuid": "88acc29f-45ca-4862-acf9-b8eab54fe88d", 00:14:00.031 "is_configured": true, 00:14:00.031 "data_offset": 0, 00:14:00.031 "data_size": 65536 00:14:00.031 } 00:14:00.031 ] 00:14:00.031 } 00:14:00.031 } 00:14:00.031 }' 00:14:00.031 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.031 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:00.031 BaseBdev2 00:14:00.031 BaseBdev3 
00:14:00.031 BaseBdev4' 00:14:00.031 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.290 07:10:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.290 07:10:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.290 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.290 [2024-11-20 07:10:57.598627] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.290 [2024-11-20 07:10:57.598812] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:00.290 [2024-11-20 07:10:57.599016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.549 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.549 "name": "Existed_Raid", 00:14:00.549 "uuid": "5c14bbda-c732-463d-8b64-6c1a27b4a140", 00:14:00.549 "strip_size_kb": 64, 00:14:00.549 "state": "offline", 00:14:00.549 "raid_level": "raid0", 00:14:00.549 "superblock": false, 00:14:00.549 "num_base_bdevs": 4, 00:14:00.549 "num_base_bdevs_discovered": 3, 00:14:00.550 "num_base_bdevs_operational": 3, 00:14:00.550 "base_bdevs_list": [ 00:14:00.550 { 00:14:00.550 "name": null, 00:14:00.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.550 "is_configured": false, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "name": "BaseBdev2", 00:14:00.550 "uuid": "f7a478f6-a9bb-49e4-a7fe-9441ab7b2877", 00:14:00.550 "is_configured": 
true, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "name": "BaseBdev3", 00:14:00.550 "uuid": "c97eb0dd-5a80-456b-8d5f-191f9d63a761", 00:14:00.550 "is_configured": true, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 }, 00:14:00.550 { 00:14:00.550 "name": "BaseBdev4", 00:14:00.550 "uuid": "88acc29f-45ca-4862-acf9-b8eab54fe88d", 00:14:00.550 "is_configured": true, 00:14:00.550 "data_offset": 0, 00:14:00.550 "data_size": 65536 00:14:00.550 } 00:14:00.550 ] 00:14:00.550 }' 00:14:00.550 07:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.550 07:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.116 [2024-11-20 07:10:58.272796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.116 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.116 [2024-11-20 07:10:58.427205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:01.374 07:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.374 [2024-11-20 07:10:58.573513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:01.374 [2024-11-20 07:10:58.573746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:01.374 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.375 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:01.375 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.375 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:01.375 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.633 BaseBdev2 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.633 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.633 [ 00:14:01.633 { 00:14:01.633 "name": "BaseBdev2", 00:14:01.633 "aliases": [ 00:14:01.633 "5114e692-5223-44f3-a49b-38764797b137" 00:14:01.633 ], 00:14:01.633 "product_name": "Malloc disk", 00:14:01.633 "block_size": 512, 00:14:01.633 "num_blocks": 65536, 00:14:01.633 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:01.633 "assigned_rate_limits": { 00:14:01.633 "rw_ios_per_sec": 0, 00:14:01.633 "rw_mbytes_per_sec": 0, 00:14:01.633 "r_mbytes_per_sec": 0, 00:14:01.633 "w_mbytes_per_sec": 0 00:14:01.633 }, 00:14:01.633 "claimed": false, 00:14:01.633 "zoned": false, 00:14:01.633 "supported_io_types": { 00:14:01.633 "read": true, 00:14:01.633 "write": true, 00:14:01.633 "unmap": true, 00:14:01.633 "flush": true, 00:14:01.633 "reset": true, 00:14:01.633 "nvme_admin": false, 00:14:01.634 "nvme_io": false, 00:14:01.634 "nvme_io_md": false, 00:14:01.634 "write_zeroes": true, 00:14:01.634 "zcopy": true, 00:14:01.634 "get_zone_info": false, 00:14:01.634 "zone_management": false, 00:14:01.634 "zone_append": false, 00:14:01.634 "compare": false, 00:14:01.634 "compare_and_write": false, 00:14:01.634 "abort": true, 00:14:01.634 "seek_hole": false, 00:14:01.634 
"seek_data": false, 00:14:01.634 "copy": true, 00:14:01.634 "nvme_iov_md": false 00:14:01.634 }, 00:14:01.634 "memory_domains": [ 00:14:01.634 { 00:14:01.634 "dma_device_id": "system", 00:14:01.634 "dma_device_type": 1 00:14:01.634 }, 00:14:01.634 { 00:14:01.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.634 "dma_device_type": 2 00:14:01.634 } 00:14:01.634 ], 00:14:01.634 "driver_specific": {} 00:14:01.634 } 00:14:01.634 ] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.634 BaseBdev3 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.634 [ 00:14:01.634 { 00:14:01.634 "name": "BaseBdev3", 00:14:01.634 "aliases": [ 00:14:01.634 "d917cc2f-7652-4f31-be05-4e3e4cabda06" 00:14:01.634 ], 00:14:01.634 "product_name": "Malloc disk", 00:14:01.634 "block_size": 512, 00:14:01.634 "num_blocks": 65536, 00:14:01.634 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:01.634 "assigned_rate_limits": { 00:14:01.634 "rw_ios_per_sec": 0, 00:14:01.634 "rw_mbytes_per_sec": 0, 00:14:01.634 "r_mbytes_per_sec": 0, 00:14:01.634 "w_mbytes_per_sec": 0 00:14:01.634 }, 00:14:01.634 "claimed": false, 00:14:01.634 "zoned": false, 00:14:01.634 "supported_io_types": { 00:14:01.634 "read": true, 00:14:01.634 "write": true, 00:14:01.634 "unmap": true, 00:14:01.634 "flush": true, 00:14:01.634 "reset": true, 00:14:01.634 "nvme_admin": false, 00:14:01.634 "nvme_io": false, 00:14:01.634 "nvme_io_md": false, 00:14:01.634 "write_zeroes": true, 00:14:01.634 "zcopy": true, 00:14:01.634 "get_zone_info": false, 00:14:01.634 "zone_management": false, 00:14:01.634 "zone_append": false, 00:14:01.634 "compare": false, 00:14:01.634 "compare_and_write": false, 00:14:01.634 "abort": true, 00:14:01.634 "seek_hole": false, 00:14:01.634 "seek_data": false, 
00:14:01.634 "copy": true, 00:14:01.634 "nvme_iov_md": false 00:14:01.634 }, 00:14:01.634 "memory_domains": [ 00:14:01.634 { 00:14:01.634 "dma_device_id": "system", 00:14:01.634 "dma_device_type": 1 00:14:01.634 }, 00:14:01.634 { 00:14:01.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.634 "dma_device_type": 2 00:14:01.634 } 00:14:01.634 ], 00:14:01.634 "driver_specific": {} 00:14:01.634 } 00:14:01.634 ] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.634 BaseBdev4 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:01.634 
07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.634 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.634 [ 00:14:01.634 { 00:14:01.634 "name": "BaseBdev4", 00:14:01.634 "aliases": [ 00:14:01.634 "3aab7765-a721-4ed4-b853-7507ccdfbffb" 00:14:01.634 ], 00:14:01.634 "product_name": "Malloc disk", 00:14:01.634 "block_size": 512, 00:14:01.634 "num_blocks": 65536, 00:14:01.634 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:01.634 "assigned_rate_limits": { 00:14:01.634 "rw_ios_per_sec": 0, 00:14:01.634 "rw_mbytes_per_sec": 0, 00:14:01.634 "r_mbytes_per_sec": 0, 00:14:01.634 "w_mbytes_per_sec": 0 00:14:01.634 }, 00:14:01.634 "claimed": false, 00:14:01.634 "zoned": false, 00:14:01.634 "supported_io_types": { 00:14:01.634 "read": true, 00:14:01.634 "write": true, 00:14:01.634 "unmap": true, 00:14:01.634 "flush": true, 00:14:01.634 "reset": true, 00:14:01.634 "nvme_admin": false, 00:14:01.634 "nvme_io": false, 00:14:01.634 "nvme_io_md": false, 00:14:01.634 "write_zeroes": true, 00:14:01.634 "zcopy": true, 00:14:01.634 "get_zone_info": false, 00:14:01.634 "zone_management": false, 00:14:01.634 "zone_append": false, 00:14:01.634 "compare": false, 00:14:01.634 "compare_and_write": false, 00:14:01.634 "abort": true, 00:14:01.635 "seek_hole": false, 00:14:01.895 "seek_data": false, 00:14:01.895 
"copy": true, 00:14:01.895 "nvme_iov_md": false 00:14:01.895 }, 00:14:01.895 "memory_domains": [ 00:14:01.895 { 00:14:01.895 "dma_device_id": "system", 00:14:01.895 "dma_device_type": 1 00:14:01.895 }, 00:14:01.895 { 00:14:01.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.895 "dma_device_type": 2 00:14:01.895 } 00:14:01.895 ], 00:14:01.895 "driver_specific": {} 00:14:01.895 } 00:14:01.895 ] 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.895 [2024-11-20 07:10:58.957449] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.895 [2024-11-20 07:10:58.957636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.895 [2024-11-20 07:10:58.957770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:01.895 [2024-11-20 07:10:58.960268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:01.895 [2024-11-20 07:10:58.960489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.895 07:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.895 07:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.895 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.895 "name": "Existed_Raid", 00:14:01.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.895 "strip_size_kb": 64, 00:14:01.895 "state": "configuring", 00:14:01.895 
"raid_level": "raid0", 00:14:01.895 "superblock": false, 00:14:01.895 "num_base_bdevs": 4, 00:14:01.895 "num_base_bdevs_discovered": 3, 00:14:01.895 "num_base_bdevs_operational": 4, 00:14:01.895 "base_bdevs_list": [ 00:14:01.895 { 00:14:01.895 "name": "BaseBdev1", 00:14:01.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.895 "is_configured": false, 00:14:01.895 "data_offset": 0, 00:14:01.895 "data_size": 0 00:14:01.895 }, 00:14:01.895 { 00:14:01.895 "name": "BaseBdev2", 00:14:01.895 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:01.895 "is_configured": true, 00:14:01.895 "data_offset": 0, 00:14:01.895 "data_size": 65536 00:14:01.895 }, 00:14:01.895 { 00:14:01.895 "name": "BaseBdev3", 00:14:01.895 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:01.895 "is_configured": true, 00:14:01.895 "data_offset": 0, 00:14:01.895 "data_size": 65536 00:14:01.895 }, 00:14:01.895 { 00:14:01.895 "name": "BaseBdev4", 00:14:01.895 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:01.895 "is_configured": true, 00:14:01.895 "data_offset": 0, 00:14:01.895 "data_size": 65536 00:14:01.895 } 00:14:01.895 ] 00:14:01.895 }' 00:14:01.895 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.895 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.154 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:02.154 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.154 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.431 [2024-11-20 07:10:59.477631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.431 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.432 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.432 "name": "Existed_Raid", 00:14:02.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.432 "strip_size_kb": 64, 00:14:02.432 "state": "configuring", 00:14:02.432 "raid_level": "raid0", 00:14:02.432 "superblock": false, 00:14:02.432 
"num_base_bdevs": 4, 00:14:02.432 "num_base_bdevs_discovered": 2, 00:14:02.432 "num_base_bdevs_operational": 4, 00:14:02.432 "base_bdevs_list": [ 00:14:02.432 { 00:14:02.432 "name": "BaseBdev1", 00:14:02.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.432 "is_configured": false, 00:14:02.432 "data_offset": 0, 00:14:02.432 "data_size": 0 00:14:02.432 }, 00:14:02.432 { 00:14:02.432 "name": null, 00:14:02.432 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:02.432 "is_configured": false, 00:14:02.432 "data_offset": 0, 00:14:02.432 "data_size": 65536 00:14:02.432 }, 00:14:02.432 { 00:14:02.432 "name": "BaseBdev3", 00:14:02.432 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:02.432 "is_configured": true, 00:14:02.432 "data_offset": 0, 00:14:02.432 "data_size": 65536 00:14:02.432 }, 00:14:02.432 { 00:14:02.432 "name": "BaseBdev4", 00:14:02.432 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:02.432 "is_configured": true, 00:14:02.432 "data_offset": 0, 00:14:02.432 "data_size": 65536 00:14:02.432 } 00:14:02.432 ] 00:14:02.432 }' 00:14:02.432 07:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.432 07:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.715 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.715 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.715 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.715 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:02.715 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:02.975 07:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.975 [2024-11-20 07:11:00.096627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.975 BaseBdev1 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:02.975 [ 00:14:02.975 { 00:14:02.975 "name": "BaseBdev1", 00:14:02.975 "aliases": [ 00:14:02.975 "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a" 00:14:02.975 ], 00:14:02.975 "product_name": "Malloc disk", 00:14:02.975 "block_size": 512, 00:14:02.975 "num_blocks": 65536, 00:14:02.975 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:02.975 "assigned_rate_limits": { 00:14:02.975 "rw_ios_per_sec": 0, 00:14:02.975 "rw_mbytes_per_sec": 0, 00:14:02.975 "r_mbytes_per_sec": 0, 00:14:02.975 "w_mbytes_per_sec": 0 00:14:02.975 }, 00:14:02.975 "claimed": true, 00:14:02.975 "claim_type": "exclusive_write", 00:14:02.975 "zoned": false, 00:14:02.975 "supported_io_types": { 00:14:02.975 "read": true, 00:14:02.975 "write": true, 00:14:02.975 "unmap": true, 00:14:02.975 "flush": true, 00:14:02.975 "reset": true, 00:14:02.975 "nvme_admin": false, 00:14:02.975 "nvme_io": false, 00:14:02.975 "nvme_io_md": false, 00:14:02.975 "write_zeroes": true, 00:14:02.975 "zcopy": true, 00:14:02.975 "get_zone_info": false, 00:14:02.975 "zone_management": false, 00:14:02.975 "zone_append": false, 00:14:02.975 "compare": false, 00:14:02.975 "compare_and_write": false, 00:14:02.975 "abort": true, 00:14:02.975 "seek_hole": false, 00:14:02.975 "seek_data": false, 00:14:02.975 "copy": true, 00:14:02.975 "nvme_iov_md": false 00:14:02.975 }, 00:14:02.975 "memory_domains": [ 00:14:02.975 { 00:14:02.975 "dma_device_id": "system", 00:14:02.975 "dma_device_type": 1 00:14:02.975 }, 00:14:02.975 { 00:14:02.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.975 "dma_device_type": 2 00:14:02.975 } 00:14:02.975 ], 00:14:02.975 "driver_specific": {} 00:14:02.975 } 00:14:02.975 ] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.975 "name": "Existed_Raid", 00:14:02.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.975 "strip_size_kb": 64, 00:14:02.975 "state": "configuring", 00:14:02.975 "raid_level": "raid0", 00:14:02.975 "superblock": false, 
00:14:02.975 "num_base_bdevs": 4, 00:14:02.975 "num_base_bdevs_discovered": 3, 00:14:02.975 "num_base_bdevs_operational": 4, 00:14:02.975 "base_bdevs_list": [ 00:14:02.975 { 00:14:02.975 "name": "BaseBdev1", 00:14:02.975 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:02.975 "is_configured": true, 00:14:02.975 "data_offset": 0, 00:14:02.975 "data_size": 65536 00:14:02.975 }, 00:14:02.975 { 00:14:02.975 "name": null, 00:14:02.975 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:02.975 "is_configured": false, 00:14:02.975 "data_offset": 0, 00:14:02.975 "data_size": 65536 00:14:02.975 }, 00:14:02.975 { 00:14:02.975 "name": "BaseBdev3", 00:14:02.975 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:02.975 "is_configured": true, 00:14:02.975 "data_offset": 0, 00:14:02.975 "data_size": 65536 00:14:02.975 }, 00:14:02.975 { 00:14:02.975 "name": "BaseBdev4", 00:14:02.975 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:02.975 "is_configured": true, 00:14:02.975 "data_offset": 0, 00:14:02.975 "data_size": 65536 00:14:02.975 } 00:14:02.975 ] 00:14:02.975 }' 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.975 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.542 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:03.542 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.542 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.542 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.542 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.542 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:03.542 07:11:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.543 [2024-11-20 07:11:00.696988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:03.543 "name": "Existed_Raid", 00:14:03.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.543 "strip_size_kb": 64, 00:14:03.543 "state": "configuring", 00:14:03.543 "raid_level": "raid0", 00:14:03.543 "superblock": false, 00:14:03.543 "num_base_bdevs": 4, 00:14:03.543 "num_base_bdevs_discovered": 2, 00:14:03.543 "num_base_bdevs_operational": 4, 00:14:03.543 "base_bdevs_list": [ 00:14:03.543 { 00:14:03.543 "name": "BaseBdev1", 00:14:03.543 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:03.543 "is_configured": true, 00:14:03.543 "data_offset": 0, 00:14:03.543 "data_size": 65536 00:14:03.543 }, 00:14:03.543 { 00:14:03.543 "name": null, 00:14:03.543 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:03.543 "is_configured": false, 00:14:03.543 "data_offset": 0, 00:14:03.543 "data_size": 65536 00:14:03.543 }, 00:14:03.543 { 00:14:03.543 "name": null, 00:14:03.543 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:03.543 "is_configured": false, 00:14:03.543 "data_offset": 0, 00:14:03.543 "data_size": 65536 00:14:03.543 }, 00:14:03.543 { 00:14:03.543 "name": "BaseBdev4", 00:14:03.543 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:03.543 "is_configured": true, 00:14:03.543 "data_offset": 0, 00:14:03.543 "data_size": 65536 00:14:03.543 } 00:14:03.543 ] 00:14:03.543 }' 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:03.543 07:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 [2024-11-20 07:11:01.277135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.111 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.111 "name": "Existed_Raid", 00:14:04.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.111 "strip_size_kb": 64, 00:14:04.111 "state": "configuring", 00:14:04.111 "raid_level": "raid0", 00:14:04.111 "superblock": false, 00:14:04.111 "num_base_bdevs": 4, 00:14:04.111 "num_base_bdevs_discovered": 3, 00:14:04.111 "num_base_bdevs_operational": 4, 00:14:04.111 "base_bdevs_list": [ 00:14:04.111 { 00:14:04.111 "name": "BaseBdev1", 00:14:04.112 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:04.112 "is_configured": true, 00:14:04.112 "data_offset": 0, 00:14:04.112 "data_size": 65536 00:14:04.112 }, 00:14:04.112 { 00:14:04.112 "name": null, 00:14:04.112 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:04.112 "is_configured": false, 00:14:04.112 "data_offset": 0, 00:14:04.112 "data_size": 65536 00:14:04.112 }, 00:14:04.112 { 00:14:04.112 "name": "BaseBdev3", 00:14:04.112 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:04.112 "is_configured": 
true, 00:14:04.112 "data_offset": 0, 00:14:04.112 "data_size": 65536 00:14:04.112 }, 00:14:04.112 { 00:14:04.112 "name": "BaseBdev4", 00:14:04.112 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:04.112 "is_configured": true, 00:14:04.112 "data_offset": 0, 00:14:04.112 "data_size": 65536 00:14:04.112 } 00:14:04.112 ] 00:14:04.112 }' 00:14:04.112 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.112 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.678 [2024-11-20 07:11:01.885395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.678 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.937 07:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.937 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.937 "name": "Existed_Raid", 00:14:04.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.937 "strip_size_kb": 64, 00:14:04.937 "state": "configuring", 00:14:04.937 "raid_level": "raid0", 00:14:04.937 "superblock": false, 00:14:04.937 "num_base_bdevs": 4, 00:14:04.937 "num_base_bdevs_discovered": 2, 00:14:04.937 "num_base_bdevs_operational": 4, 00:14:04.937 
"base_bdevs_list": [ 00:14:04.937 { 00:14:04.937 "name": null, 00:14:04.937 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:04.937 "is_configured": false, 00:14:04.937 "data_offset": 0, 00:14:04.937 "data_size": 65536 00:14:04.937 }, 00:14:04.937 { 00:14:04.937 "name": null, 00:14:04.937 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:04.937 "is_configured": false, 00:14:04.937 "data_offset": 0, 00:14:04.937 "data_size": 65536 00:14:04.937 }, 00:14:04.937 { 00:14:04.937 "name": "BaseBdev3", 00:14:04.937 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:04.937 "is_configured": true, 00:14:04.937 "data_offset": 0, 00:14:04.937 "data_size": 65536 00:14:04.937 }, 00:14:04.937 { 00:14:04.937 "name": "BaseBdev4", 00:14:04.937 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:04.937 "is_configured": true, 00:14:04.937 "data_offset": 0, 00:14:04.937 "data_size": 65536 00:14:04.937 } 00:14:04.937 ] 00:14:04.937 }' 00:14:04.937 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.937 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.196 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.196 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:05.196 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.196 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:05.454 07:11:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.454 [2024-11-20 07:11:02.552268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.454 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.455 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:14:05.455 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.455 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.455 "name": "Existed_Raid", 00:14:05.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.455 "strip_size_kb": 64, 00:14:05.455 "state": "configuring", 00:14:05.455 "raid_level": "raid0", 00:14:05.455 "superblock": false, 00:14:05.455 "num_base_bdevs": 4, 00:14:05.455 "num_base_bdevs_discovered": 3, 00:14:05.455 "num_base_bdevs_operational": 4, 00:14:05.455 "base_bdevs_list": [ 00:14:05.455 { 00:14:05.455 "name": null, 00:14:05.455 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:05.455 "is_configured": false, 00:14:05.455 "data_offset": 0, 00:14:05.455 "data_size": 65536 00:14:05.455 }, 00:14:05.455 { 00:14:05.455 "name": "BaseBdev2", 00:14:05.455 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:05.455 "is_configured": true, 00:14:05.455 "data_offset": 0, 00:14:05.455 "data_size": 65536 00:14:05.455 }, 00:14:05.455 { 00:14:05.455 "name": "BaseBdev3", 00:14:05.455 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:05.455 "is_configured": true, 00:14:05.455 "data_offset": 0, 00:14:05.455 "data_size": 65536 00:14:05.455 }, 00:14:05.455 { 00:14:05.455 "name": "BaseBdev4", 00:14:05.455 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:05.455 "is_configured": true, 00:14:05.455 "data_offset": 0, 00:14:05.455 "data_size": 65536 00:14:05.455 } 00:14:05.455 ] 00:14:05.455 }' 00:14:05.455 07:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.455 07:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5675cc15-ac7b-4ae7-8b89-a75efc15fd0a 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.023 [2024-11-20 07:11:03.246044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:06.023 [2024-11-20 07:11:03.246252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:06.023 [2024-11-20 07:11:03.246276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:06.023 [2024-11-20 07:11:03.246613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:06.023 [2024-11-20 07:11:03.246809] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:06.023 [2024-11-20 07:11:03.246840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:06.023 [2024-11-20 07:11:03.247150] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.023 NewBaseBdev 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:06.023 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.024 [ 00:14:06.024 { 
00:14:06.024 "name": "NewBaseBdev", 00:14:06.024 "aliases": [ 00:14:06.024 "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a" 00:14:06.024 ], 00:14:06.024 "product_name": "Malloc disk", 00:14:06.024 "block_size": 512, 00:14:06.024 "num_blocks": 65536, 00:14:06.024 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:06.024 "assigned_rate_limits": { 00:14:06.024 "rw_ios_per_sec": 0, 00:14:06.024 "rw_mbytes_per_sec": 0, 00:14:06.024 "r_mbytes_per_sec": 0, 00:14:06.024 "w_mbytes_per_sec": 0 00:14:06.024 }, 00:14:06.024 "claimed": true, 00:14:06.024 "claim_type": "exclusive_write", 00:14:06.024 "zoned": false, 00:14:06.024 "supported_io_types": { 00:14:06.024 "read": true, 00:14:06.024 "write": true, 00:14:06.024 "unmap": true, 00:14:06.024 "flush": true, 00:14:06.024 "reset": true, 00:14:06.024 "nvme_admin": false, 00:14:06.024 "nvme_io": false, 00:14:06.024 "nvme_io_md": false, 00:14:06.024 "write_zeroes": true, 00:14:06.024 "zcopy": true, 00:14:06.024 "get_zone_info": false, 00:14:06.024 "zone_management": false, 00:14:06.024 "zone_append": false, 00:14:06.024 "compare": false, 00:14:06.024 "compare_and_write": false, 00:14:06.024 "abort": true, 00:14:06.024 "seek_hole": false, 00:14:06.024 "seek_data": false, 00:14:06.024 "copy": true, 00:14:06.024 "nvme_iov_md": false 00:14:06.024 }, 00:14:06.024 "memory_domains": [ 00:14:06.024 { 00:14:06.024 "dma_device_id": "system", 00:14:06.024 "dma_device_type": 1 00:14:06.024 }, 00:14:06.024 { 00:14:06.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.024 "dma_device_type": 2 00:14:06.024 } 00:14:06.024 ], 00:14:06.024 "driver_specific": {} 00:14:06.024 } 00:14:06.024 ] 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:06.024 
07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.024 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.025 "name": "Existed_Raid", 00:14:06.025 "uuid": "4ed3ca93-b079-46c7-b097-9c844ce69cf0", 00:14:06.025 "strip_size_kb": 64, 00:14:06.025 "state": "online", 00:14:06.025 "raid_level": "raid0", 00:14:06.025 "superblock": false, 00:14:06.025 "num_base_bdevs": 4, 00:14:06.025 "num_base_bdevs_discovered": 4, 00:14:06.025 
"num_base_bdevs_operational": 4, 00:14:06.025 "base_bdevs_list": [ 00:14:06.025 { 00:14:06.025 "name": "NewBaseBdev", 00:14:06.025 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:06.025 "is_configured": true, 00:14:06.025 "data_offset": 0, 00:14:06.025 "data_size": 65536 00:14:06.025 }, 00:14:06.025 { 00:14:06.025 "name": "BaseBdev2", 00:14:06.025 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:06.025 "is_configured": true, 00:14:06.025 "data_offset": 0, 00:14:06.025 "data_size": 65536 00:14:06.025 }, 00:14:06.025 { 00:14:06.025 "name": "BaseBdev3", 00:14:06.025 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:06.025 "is_configured": true, 00:14:06.025 "data_offset": 0, 00:14:06.025 "data_size": 65536 00:14:06.025 }, 00:14:06.025 { 00:14:06.025 "name": "BaseBdev4", 00:14:06.025 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:06.025 "is_configured": true, 00:14:06.025 "data_offset": 0, 00:14:06.025 "data_size": 65536 00:14:06.025 } 00:14:06.025 ] 00:14:06.025 }' 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.025 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:06.593 
07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.593 [2024-11-20 07:11:03.842744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:06.593 "name": "Existed_Raid", 00:14:06.593 "aliases": [ 00:14:06.593 "4ed3ca93-b079-46c7-b097-9c844ce69cf0" 00:14:06.593 ], 00:14:06.593 "product_name": "Raid Volume", 00:14:06.593 "block_size": 512, 00:14:06.593 "num_blocks": 262144, 00:14:06.593 "uuid": "4ed3ca93-b079-46c7-b097-9c844ce69cf0", 00:14:06.593 "assigned_rate_limits": { 00:14:06.593 "rw_ios_per_sec": 0, 00:14:06.593 "rw_mbytes_per_sec": 0, 00:14:06.593 "r_mbytes_per_sec": 0, 00:14:06.593 "w_mbytes_per_sec": 0 00:14:06.593 }, 00:14:06.593 "claimed": false, 00:14:06.593 "zoned": false, 00:14:06.593 "supported_io_types": { 00:14:06.593 "read": true, 00:14:06.593 "write": true, 00:14:06.593 "unmap": true, 00:14:06.593 "flush": true, 00:14:06.593 "reset": true, 00:14:06.593 "nvme_admin": false, 00:14:06.593 "nvme_io": false, 00:14:06.593 "nvme_io_md": false, 00:14:06.593 "write_zeroes": true, 00:14:06.593 "zcopy": false, 00:14:06.593 "get_zone_info": false, 00:14:06.593 "zone_management": false, 00:14:06.593 "zone_append": false, 00:14:06.593 "compare": false, 00:14:06.593 "compare_and_write": false, 00:14:06.593 "abort": false, 00:14:06.593 "seek_hole": false, 00:14:06.593 "seek_data": false, 00:14:06.593 "copy": false, 00:14:06.593 "nvme_iov_md": false 00:14:06.593 }, 00:14:06.593 "memory_domains": [ 00:14:06.593 { 00:14:06.593 "dma_device_id": 
"system", 00:14:06.593 "dma_device_type": 1 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.593 "dma_device_type": 2 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "system", 00:14:06.593 "dma_device_type": 1 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.593 "dma_device_type": 2 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "system", 00:14:06.593 "dma_device_type": 1 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.593 "dma_device_type": 2 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "system", 00:14:06.593 "dma_device_type": 1 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:06.593 "dma_device_type": 2 00:14:06.593 } 00:14:06.593 ], 00:14:06.593 "driver_specific": { 00:14:06.593 "raid": { 00:14:06.593 "uuid": "4ed3ca93-b079-46c7-b097-9c844ce69cf0", 00:14:06.593 "strip_size_kb": 64, 00:14:06.593 "state": "online", 00:14:06.593 "raid_level": "raid0", 00:14:06.593 "superblock": false, 00:14:06.593 "num_base_bdevs": 4, 00:14:06.593 "num_base_bdevs_discovered": 4, 00:14:06.593 "num_base_bdevs_operational": 4, 00:14:06.593 "base_bdevs_list": [ 00:14:06.593 { 00:14:06.593 "name": "NewBaseBdev", 00:14:06.593 "uuid": "5675cc15-ac7b-4ae7-8b89-a75efc15fd0a", 00:14:06.593 "is_configured": true, 00:14:06.593 "data_offset": 0, 00:14:06.593 "data_size": 65536 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "name": "BaseBdev2", 00:14:06.593 "uuid": "5114e692-5223-44f3-a49b-38764797b137", 00:14:06.593 "is_configured": true, 00:14:06.593 "data_offset": 0, 00:14:06.593 "data_size": 65536 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "name": "BaseBdev3", 00:14:06.593 "uuid": "d917cc2f-7652-4f31-be05-4e3e4cabda06", 00:14:06.593 "is_configured": true, 00:14:06.593 "data_offset": 0, 00:14:06.593 "data_size": 65536 00:14:06.593 }, 00:14:06.593 { 00:14:06.593 "name": 
"BaseBdev4", 00:14:06.593 "uuid": "3aab7765-a721-4ed4-b853-7507ccdfbffb", 00:14:06.593 "is_configured": true, 00:14:06.593 "data_offset": 0, 00:14:06.593 "data_size": 65536 00:14:06.593 } 00:14:06.593 ] 00:14:06.593 } 00:14:06.593 } 00:14:06.593 }' 00:14:06.593 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:06.852 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:06.853 BaseBdev2 00:14:06.853 BaseBdev3 00:14:06.853 BaseBdev4' 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.853 07:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:06.853 07:11:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:06.853 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.112 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.112 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:07.112 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:07.112 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.113 [2024-11-20 07:11:04.222401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.113 [2024-11-20 07:11:04.222603] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.113 [2024-11-20 07:11:04.222805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.113 [2024-11-20 07:11:04.223014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.113 [2024-11-20 07:11:04.223129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69385 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 69385 ']' 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69385 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69385 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:07.113 killing process with pid 69385 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69385' 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69385 00:14:07.113 [2024-11-20 07:11:04.264909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.113 07:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69385 00:14:07.372 [2024-11-20 07:11:04.622176] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.748 07:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:08.748 00:14:08.748 real 0m13.059s 00:14:08.748 user 0m21.823s 00:14:08.748 sys 0m1.735s 00:14:08.748 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.748 07:11:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.748 ************************************ 00:14:08.748 END TEST raid_state_function_test 00:14:08.748 ************************************ 00:14:08.748 07:11:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:14:08.748 07:11:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:08.748 07:11:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.748 07:11:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.748 ************************************ 00:14:08.748 START TEST raid_state_function_test_sb 00:14:08.748 ************************************ 00:14:08.748 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:08.749 07:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70073 00:14:08.749 Process raid pid: 70073 00:14:08.749 07:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70073' 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70073 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70073 ']' 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.749 07:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.749 [2024-11-20 07:11:05.795860] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:14:08.749 [2024-11-20 07:11:05.796061] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.749 [2024-11-20 07:11:05.976444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.008 [2024-11-20 07:11:06.107999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.008 [2024-11-20 07:11:06.314368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.008 [2024-11-20 07:11:06.314425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.574 [2024-11-20 07:11:06.767465] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.574 [2024-11-20 07:11:06.767543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.574 [2024-11-20 07:11:06.767560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.574 [2024-11-20 07:11:06.767576] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.574 [2024-11-20 07:11:06.767586] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:14:09.574 [2024-11-20 07:11:06.767600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.574 [2024-11-20 07:11:06.767610] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:09.574 [2024-11-20 07:11:06.767624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.574 07:11:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.574 "name": "Existed_Raid", 00:14:09.574 "uuid": "1936651c-c76d-4ad1-b023-654e720ad80d", 00:14:09.574 "strip_size_kb": 64, 00:14:09.574 "state": "configuring", 00:14:09.574 "raid_level": "raid0", 00:14:09.574 "superblock": true, 00:14:09.574 "num_base_bdevs": 4, 00:14:09.574 "num_base_bdevs_discovered": 0, 00:14:09.574 "num_base_bdevs_operational": 4, 00:14:09.574 "base_bdevs_list": [ 00:14:09.574 { 00:14:09.574 "name": "BaseBdev1", 00:14:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.574 "is_configured": false, 00:14:09.574 "data_offset": 0, 00:14:09.574 "data_size": 0 00:14:09.574 }, 00:14:09.574 { 00:14:09.574 "name": "BaseBdev2", 00:14:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.574 "is_configured": false, 00:14:09.574 "data_offset": 0, 00:14:09.574 "data_size": 0 00:14:09.574 }, 00:14:09.574 { 00:14:09.574 "name": "BaseBdev3", 00:14:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.574 "is_configured": false, 00:14:09.574 "data_offset": 0, 00:14:09.574 "data_size": 0 00:14:09.574 }, 00:14:09.574 { 00:14:09.574 "name": "BaseBdev4", 00:14:09.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.574 "is_configured": false, 00:14:09.574 "data_offset": 0, 00:14:09.574 "data_size": 0 00:14:09.574 } 00:14:09.574 ] 00:14:09.574 }' 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.574 07:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 [2024-11-20 07:11:07.295561] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.152 [2024-11-20 07:11:07.295626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 [2024-11-20 07:11:07.303561] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.152 [2024-11-20 07:11:07.303646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.152 [2024-11-20 07:11:07.303678] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.152 [2024-11-20 07:11:07.303694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.152 [2024-11-20 07:11:07.303715] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.152 [2024-11-20 07:11:07.303733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.152 [2024-11-20 07:11:07.303743] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:14:10.152 [2024-11-20 07:11:07.303767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 [2024-11-20 07:11:07.349173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.152 BaseBdev1 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.152 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.152 [ 00:14:10.152 { 00:14:10.152 "name": "BaseBdev1", 00:14:10.152 "aliases": [ 00:14:10.152 "0bdc2266-c051-43e5-8d14-f82fdf514797" 00:14:10.152 ], 00:14:10.152 "product_name": "Malloc disk", 00:14:10.152 "block_size": 512, 00:14:10.152 "num_blocks": 65536, 00:14:10.152 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:10.152 "assigned_rate_limits": { 00:14:10.152 "rw_ios_per_sec": 0, 00:14:10.152 "rw_mbytes_per_sec": 0, 00:14:10.152 "r_mbytes_per_sec": 0, 00:14:10.152 "w_mbytes_per_sec": 0 00:14:10.152 }, 00:14:10.152 "claimed": true, 00:14:10.152 "claim_type": "exclusive_write", 00:14:10.152 "zoned": false, 00:14:10.152 "supported_io_types": { 00:14:10.152 "read": true, 00:14:10.152 "write": true, 00:14:10.152 "unmap": true, 00:14:10.152 "flush": true, 00:14:10.152 "reset": true, 00:14:10.152 "nvme_admin": false, 00:14:10.152 "nvme_io": false, 00:14:10.152 "nvme_io_md": false, 00:14:10.152 "write_zeroes": true, 00:14:10.152 "zcopy": true, 00:14:10.152 "get_zone_info": false, 00:14:10.152 "zone_management": false, 00:14:10.152 "zone_append": false, 00:14:10.152 "compare": false, 00:14:10.152 "compare_and_write": false, 00:14:10.152 "abort": true, 00:14:10.152 "seek_hole": false, 00:14:10.152 "seek_data": false, 00:14:10.152 "copy": true, 00:14:10.152 "nvme_iov_md": false 00:14:10.152 }, 00:14:10.152 "memory_domains": [ 00:14:10.152 { 00:14:10.152 "dma_device_id": "system", 00:14:10.152 "dma_device_type": 1 00:14:10.153 }, 00:14:10.153 { 00:14:10.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.153 "dma_device_type": 2 00:14:10.153 } 00:14:10.153 ], 00:14:10.153 "driver_specific": {} 
00:14:10.153 } 00:14:10.153 ] 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.153 "name": "Existed_Raid", 00:14:10.153 "uuid": "fd343e5e-a4f4-4ae6-8749-b80e9b7525ab", 00:14:10.153 "strip_size_kb": 64, 00:14:10.153 "state": "configuring", 00:14:10.153 "raid_level": "raid0", 00:14:10.153 "superblock": true, 00:14:10.153 "num_base_bdevs": 4, 00:14:10.153 "num_base_bdevs_discovered": 1, 00:14:10.153 "num_base_bdevs_operational": 4, 00:14:10.153 "base_bdevs_list": [ 00:14:10.153 { 00:14:10.153 "name": "BaseBdev1", 00:14:10.153 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:10.153 "is_configured": true, 00:14:10.153 "data_offset": 2048, 00:14:10.153 "data_size": 63488 00:14:10.153 }, 00:14:10.153 { 00:14:10.153 "name": "BaseBdev2", 00:14:10.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.153 "is_configured": false, 00:14:10.153 "data_offset": 0, 00:14:10.153 "data_size": 0 00:14:10.153 }, 00:14:10.153 { 00:14:10.153 "name": "BaseBdev3", 00:14:10.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.153 "is_configured": false, 00:14:10.153 "data_offset": 0, 00:14:10.153 "data_size": 0 00:14:10.153 }, 00:14:10.153 { 00:14:10.153 "name": "BaseBdev4", 00:14:10.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.153 "is_configured": false, 00:14:10.153 "data_offset": 0, 00:14:10.153 "data_size": 0 00:14:10.153 } 00:14:10.153 ] 00:14:10.153 }' 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.153 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.718 [2024-11-20 07:11:07.929373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.718 [2024-11-20 07:11:07.929456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.718 [2024-11-20 07:11:07.937442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.718 [2024-11-20 07:11:07.940004] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.718 [2024-11-20 07:11:07.940064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.718 [2024-11-20 07:11:07.940081] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.718 [2024-11-20 07:11:07.940099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.718 [2024-11-20 07:11:07.940109] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.718 [2024-11-20 07:11:07.940124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:10.718 07:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.718 "name": 
"Existed_Raid", 00:14:10.718 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:10.718 "strip_size_kb": 64, 00:14:10.718 "state": "configuring", 00:14:10.718 "raid_level": "raid0", 00:14:10.718 "superblock": true, 00:14:10.718 "num_base_bdevs": 4, 00:14:10.718 "num_base_bdevs_discovered": 1, 00:14:10.718 "num_base_bdevs_operational": 4, 00:14:10.718 "base_bdevs_list": [ 00:14:10.718 { 00:14:10.718 "name": "BaseBdev1", 00:14:10.718 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:10.718 "is_configured": true, 00:14:10.718 "data_offset": 2048, 00:14:10.718 "data_size": 63488 00:14:10.718 }, 00:14:10.718 { 00:14:10.718 "name": "BaseBdev2", 00:14:10.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.718 "is_configured": false, 00:14:10.718 "data_offset": 0, 00:14:10.718 "data_size": 0 00:14:10.718 }, 00:14:10.718 { 00:14:10.718 "name": "BaseBdev3", 00:14:10.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.718 "is_configured": false, 00:14:10.718 "data_offset": 0, 00:14:10.718 "data_size": 0 00:14:10.718 }, 00:14:10.718 { 00:14:10.718 "name": "BaseBdev4", 00:14:10.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.718 "is_configured": false, 00:14:10.718 "data_offset": 0, 00:14:10.718 "data_size": 0 00:14:10.718 } 00:14:10.718 ] 00:14:10.718 }' 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.718 07:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.283 [2024-11-20 07:11:08.507524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:14:11.283 BaseBdev2 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.283 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.283 [ 00:14:11.283 { 00:14:11.283 "name": "BaseBdev2", 00:14:11.283 "aliases": [ 00:14:11.283 "97167269-9fd0-4792-b814-798d900b175b" 00:14:11.283 ], 00:14:11.283 "product_name": "Malloc disk", 00:14:11.283 "block_size": 512, 00:14:11.283 "num_blocks": 65536, 00:14:11.283 "uuid": "97167269-9fd0-4792-b814-798d900b175b", 00:14:11.283 
"assigned_rate_limits": { 00:14:11.283 "rw_ios_per_sec": 0, 00:14:11.283 "rw_mbytes_per_sec": 0, 00:14:11.283 "r_mbytes_per_sec": 0, 00:14:11.283 "w_mbytes_per_sec": 0 00:14:11.283 }, 00:14:11.283 "claimed": true, 00:14:11.283 "claim_type": "exclusive_write", 00:14:11.283 "zoned": false, 00:14:11.283 "supported_io_types": { 00:14:11.283 "read": true, 00:14:11.283 "write": true, 00:14:11.283 "unmap": true, 00:14:11.283 "flush": true, 00:14:11.283 "reset": true, 00:14:11.283 "nvme_admin": false, 00:14:11.283 "nvme_io": false, 00:14:11.283 "nvme_io_md": false, 00:14:11.283 "write_zeroes": true, 00:14:11.283 "zcopy": true, 00:14:11.283 "get_zone_info": false, 00:14:11.283 "zone_management": false, 00:14:11.283 "zone_append": false, 00:14:11.283 "compare": false, 00:14:11.283 "compare_and_write": false, 00:14:11.283 "abort": true, 00:14:11.283 "seek_hole": false, 00:14:11.284 "seek_data": false, 00:14:11.284 "copy": true, 00:14:11.284 "nvme_iov_md": false 00:14:11.284 }, 00:14:11.284 "memory_domains": [ 00:14:11.284 { 00:14:11.284 "dma_device_id": "system", 00:14:11.284 "dma_device_type": 1 00:14:11.284 }, 00:14:11.284 { 00:14:11.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.284 "dma_device_type": 2 00:14:11.284 } 00:14:11.284 ], 00:14:11.284 "driver_specific": {} 00:14:11.284 } 00:14:11.284 ] 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.284 "name": "Existed_Raid", 00:14:11.284 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:11.284 "strip_size_kb": 64, 00:14:11.284 "state": "configuring", 00:14:11.284 "raid_level": "raid0", 00:14:11.284 "superblock": true, 00:14:11.284 "num_base_bdevs": 4, 00:14:11.284 "num_base_bdevs_discovered": 2, 00:14:11.284 "num_base_bdevs_operational": 4, 
00:14:11.284 "base_bdevs_list": [ 00:14:11.284 { 00:14:11.284 "name": "BaseBdev1", 00:14:11.284 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:11.284 "is_configured": true, 00:14:11.284 "data_offset": 2048, 00:14:11.284 "data_size": 63488 00:14:11.284 }, 00:14:11.284 { 00:14:11.284 "name": "BaseBdev2", 00:14:11.284 "uuid": "97167269-9fd0-4792-b814-798d900b175b", 00:14:11.284 "is_configured": true, 00:14:11.284 "data_offset": 2048, 00:14:11.284 "data_size": 63488 00:14:11.284 }, 00:14:11.284 { 00:14:11.284 "name": "BaseBdev3", 00:14:11.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.284 "is_configured": false, 00:14:11.284 "data_offset": 0, 00:14:11.284 "data_size": 0 00:14:11.284 }, 00:14:11.284 { 00:14:11.284 "name": "BaseBdev4", 00:14:11.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.284 "is_configured": false, 00:14:11.284 "data_offset": 0, 00:14:11.284 "data_size": 0 00:14:11.284 } 00:14:11.284 ] 00:14:11.284 }' 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.284 07:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.851 [2024-11-20 07:11:09.130068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.851 BaseBdev3 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.851 [ 00:14:11.851 { 00:14:11.851 "name": "BaseBdev3", 00:14:11.851 "aliases": [ 00:14:11.851 "283a0dac-21b6-49fa-acff-da2435bd4105" 00:14:11.851 ], 00:14:11.851 "product_name": "Malloc disk", 00:14:11.851 "block_size": 512, 00:14:11.851 "num_blocks": 65536, 00:14:11.851 "uuid": "283a0dac-21b6-49fa-acff-da2435bd4105", 00:14:11.851 "assigned_rate_limits": { 00:14:11.851 "rw_ios_per_sec": 0, 00:14:11.851 "rw_mbytes_per_sec": 0, 00:14:11.851 "r_mbytes_per_sec": 0, 00:14:11.851 "w_mbytes_per_sec": 0 00:14:11.851 }, 00:14:11.851 "claimed": true, 00:14:11.851 "claim_type": "exclusive_write", 00:14:11.851 "zoned": false, 00:14:11.851 "supported_io_types": { 00:14:11.851 "read": true, 00:14:11.851 
"write": true, 00:14:11.851 "unmap": true, 00:14:11.851 "flush": true, 00:14:11.851 "reset": true, 00:14:11.851 "nvme_admin": false, 00:14:11.851 "nvme_io": false, 00:14:11.851 "nvme_io_md": false, 00:14:11.851 "write_zeroes": true, 00:14:11.851 "zcopy": true, 00:14:11.851 "get_zone_info": false, 00:14:11.851 "zone_management": false, 00:14:11.851 "zone_append": false, 00:14:11.851 "compare": false, 00:14:11.851 "compare_and_write": false, 00:14:11.851 "abort": true, 00:14:11.851 "seek_hole": false, 00:14:11.851 "seek_data": false, 00:14:11.851 "copy": true, 00:14:11.851 "nvme_iov_md": false 00:14:11.851 }, 00:14:11.851 "memory_domains": [ 00:14:11.851 { 00:14:11.851 "dma_device_id": "system", 00:14:11.851 "dma_device_type": 1 00:14:11.851 }, 00:14:11.851 { 00:14:11.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.851 "dma_device_type": 2 00:14:11.851 } 00:14:11.851 ], 00:14:11.851 "driver_specific": {} 00:14:11.851 } 00:14:11.851 ] 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.851 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.109 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.109 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.109 "name": "Existed_Raid", 00:14:12.109 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:12.109 "strip_size_kb": 64, 00:14:12.109 "state": "configuring", 00:14:12.109 "raid_level": "raid0", 00:14:12.109 "superblock": true, 00:14:12.109 "num_base_bdevs": 4, 00:14:12.109 "num_base_bdevs_discovered": 3, 00:14:12.109 "num_base_bdevs_operational": 4, 00:14:12.109 "base_bdevs_list": [ 00:14:12.109 { 00:14:12.109 "name": "BaseBdev1", 00:14:12.109 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:12.109 "is_configured": true, 00:14:12.109 "data_offset": 2048, 00:14:12.109 "data_size": 63488 00:14:12.109 }, 00:14:12.109 { 00:14:12.109 "name": "BaseBdev2", 00:14:12.109 "uuid": 
"97167269-9fd0-4792-b814-798d900b175b", 00:14:12.109 "is_configured": true, 00:14:12.109 "data_offset": 2048, 00:14:12.109 "data_size": 63488 00:14:12.109 }, 00:14:12.109 { 00:14:12.109 "name": "BaseBdev3", 00:14:12.109 "uuid": "283a0dac-21b6-49fa-acff-da2435bd4105", 00:14:12.109 "is_configured": true, 00:14:12.109 "data_offset": 2048, 00:14:12.109 "data_size": 63488 00:14:12.109 }, 00:14:12.109 { 00:14:12.109 "name": "BaseBdev4", 00:14:12.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.109 "is_configured": false, 00:14:12.109 "data_offset": 0, 00:14:12.109 "data_size": 0 00:14:12.109 } 00:14:12.110 ] 00:14:12.110 }' 00:14:12.110 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.110 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.368 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:12.368 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.368 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.626 [2024-11-20 07:11:09.700347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.626 [2024-11-20 07:11:09.700740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:12.626 [2024-11-20 07:11:09.700772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:12.626 BaseBdev4 00:14:12.626 [2024-11-20 07:11:09.701226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:12.626 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.626 [2024-11-20 07:11:09.701518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:12.626 [2024-11-20 
07:11:09.701552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:12.626 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:12.626 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:12.627 [2024-11-20 07:11:09.701822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.627 [ 00:14:12.627 { 00:14:12.627 "name": "BaseBdev4", 00:14:12.627 "aliases": [ 00:14:12.627 "85fa66e2-a3f1-4255-87ec-0e3d0a5cf314" 00:14:12.627 ], 00:14:12.627 "product_name": "Malloc disk", 00:14:12.627 "block_size": 512, 00:14:12.627 
"num_blocks": 65536, 00:14:12.627 "uuid": "85fa66e2-a3f1-4255-87ec-0e3d0a5cf314", 00:14:12.627 "assigned_rate_limits": { 00:14:12.627 "rw_ios_per_sec": 0, 00:14:12.627 "rw_mbytes_per_sec": 0, 00:14:12.627 "r_mbytes_per_sec": 0, 00:14:12.627 "w_mbytes_per_sec": 0 00:14:12.627 }, 00:14:12.627 "claimed": true, 00:14:12.627 "claim_type": "exclusive_write", 00:14:12.627 "zoned": false, 00:14:12.627 "supported_io_types": { 00:14:12.627 "read": true, 00:14:12.627 "write": true, 00:14:12.627 "unmap": true, 00:14:12.627 "flush": true, 00:14:12.627 "reset": true, 00:14:12.627 "nvme_admin": false, 00:14:12.627 "nvme_io": false, 00:14:12.627 "nvme_io_md": false, 00:14:12.627 "write_zeroes": true, 00:14:12.627 "zcopy": true, 00:14:12.627 "get_zone_info": false, 00:14:12.627 "zone_management": false, 00:14:12.627 "zone_append": false, 00:14:12.627 "compare": false, 00:14:12.627 "compare_and_write": false, 00:14:12.627 "abort": true, 00:14:12.627 "seek_hole": false, 00:14:12.627 "seek_data": false, 00:14:12.627 "copy": true, 00:14:12.627 "nvme_iov_md": false 00:14:12.627 }, 00:14:12.627 "memory_domains": [ 00:14:12.627 { 00:14:12.627 "dma_device_id": "system", 00:14:12.627 "dma_device_type": 1 00:14:12.627 }, 00:14:12.627 { 00:14:12.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.627 "dma_device_type": 2 00:14:12.627 } 00:14:12.627 ], 00:14:12.627 "driver_specific": {} 00:14:12.627 } 00:14:12.627 ] 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.627 "name": "Existed_Raid", 00:14:12.627 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:12.627 "strip_size_kb": 64, 00:14:12.627 "state": "online", 00:14:12.627 "raid_level": "raid0", 00:14:12.627 "superblock": true, 00:14:12.627 "num_base_bdevs": 4, 
00:14:12.627 "num_base_bdevs_discovered": 4, 00:14:12.627 "num_base_bdevs_operational": 4, 00:14:12.627 "base_bdevs_list": [ 00:14:12.627 { 00:14:12.627 "name": "BaseBdev1", 00:14:12.627 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:12.627 "is_configured": true, 00:14:12.627 "data_offset": 2048, 00:14:12.627 "data_size": 63488 00:14:12.627 }, 00:14:12.627 { 00:14:12.627 "name": "BaseBdev2", 00:14:12.627 "uuid": "97167269-9fd0-4792-b814-798d900b175b", 00:14:12.627 "is_configured": true, 00:14:12.627 "data_offset": 2048, 00:14:12.627 "data_size": 63488 00:14:12.627 }, 00:14:12.627 { 00:14:12.627 "name": "BaseBdev3", 00:14:12.627 "uuid": "283a0dac-21b6-49fa-acff-da2435bd4105", 00:14:12.627 "is_configured": true, 00:14:12.627 "data_offset": 2048, 00:14:12.627 "data_size": 63488 00:14:12.627 }, 00:14:12.627 { 00:14:12.627 "name": "BaseBdev4", 00:14:12.627 "uuid": "85fa66e2-a3f1-4255-87ec-0e3d0a5cf314", 00:14:12.627 "is_configured": true, 00:14:12.627 "data_offset": 2048, 00:14:12.627 "data_size": 63488 00:14:12.627 } 00:14:12.627 ] 00:14:12.627 }' 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.627 07:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:13.194 
07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:13.194 [2024-11-20 07:11:10.229015] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.194 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:13.194 "name": "Existed_Raid", 00:14:13.194 "aliases": [ 00:14:13.194 "bda13276-0f28-44ef-8f90-08b412a065c9" 00:14:13.194 ], 00:14:13.194 "product_name": "Raid Volume", 00:14:13.195 "block_size": 512, 00:14:13.195 "num_blocks": 253952, 00:14:13.195 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:13.195 "assigned_rate_limits": { 00:14:13.195 "rw_ios_per_sec": 0, 00:14:13.195 "rw_mbytes_per_sec": 0, 00:14:13.195 "r_mbytes_per_sec": 0, 00:14:13.195 "w_mbytes_per_sec": 0 00:14:13.195 }, 00:14:13.195 "claimed": false, 00:14:13.195 "zoned": false, 00:14:13.195 "supported_io_types": { 00:14:13.195 "read": true, 00:14:13.195 "write": true, 00:14:13.195 "unmap": true, 00:14:13.195 "flush": true, 00:14:13.195 "reset": true, 00:14:13.195 "nvme_admin": false, 00:14:13.195 "nvme_io": false, 00:14:13.195 "nvme_io_md": false, 00:14:13.195 "write_zeroes": true, 00:14:13.195 "zcopy": false, 00:14:13.195 "get_zone_info": false, 00:14:13.195 "zone_management": false, 00:14:13.195 "zone_append": false, 00:14:13.195 "compare": false, 00:14:13.195 "compare_and_write": false, 00:14:13.195 "abort": false, 00:14:13.195 "seek_hole": false, 00:14:13.195 "seek_data": false, 00:14:13.195 "copy": false, 00:14:13.195 
"nvme_iov_md": false 00:14:13.195 }, 00:14:13.195 "memory_domains": [ 00:14:13.195 { 00:14:13.195 "dma_device_id": "system", 00:14:13.195 "dma_device_type": 1 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.195 "dma_device_type": 2 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "system", 00:14:13.195 "dma_device_type": 1 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.195 "dma_device_type": 2 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "system", 00:14:13.195 "dma_device_type": 1 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.195 "dma_device_type": 2 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "system", 00:14:13.195 "dma_device_type": 1 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.195 "dma_device_type": 2 00:14:13.195 } 00:14:13.195 ], 00:14:13.195 "driver_specific": { 00:14:13.195 "raid": { 00:14:13.195 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:13.195 "strip_size_kb": 64, 00:14:13.195 "state": "online", 00:14:13.195 "raid_level": "raid0", 00:14:13.195 "superblock": true, 00:14:13.195 "num_base_bdevs": 4, 00:14:13.195 "num_base_bdevs_discovered": 4, 00:14:13.195 "num_base_bdevs_operational": 4, 00:14:13.195 "base_bdevs_list": [ 00:14:13.195 { 00:14:13.195 "name": "BaseBdev1", 00:14:13.195 "uuid": "0bdc2266-c051-43e5-8d14-f82fdf514797", 00:14:13.195 "is_configured": true, 00:14:13.195 "data_offset": 2048, 00:14:13.195 "data_size": 63488 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "name": "BaseBdev2", 00:14:13.195 "uuid": "97167269-9fd0-4792-b814-798d900b175b", 00:14:13.195 "is_configured": true, 00:14:13.195 "data_offset": 2048, 00:14:13.195 "data_size": 63488 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "name": "BaseBdev3", 00:14:13.195 "uuid": "283a0dac-21b6-49fa-acff-da2435bd4105", 00:14:13.195 "is_configured": true, 
00:14:13.195 "data_offset": 2048, 00:14:13.195 "data_size": 63488 00:14:13.195 }, 00:14:13.195 { 00:14:13.195 "name": "BaseBdev4", 00:14:13.195 "uuid": "85fa66e2-a3f1-4255-87ec-0e3d0a5cf314", 00:14:13.195 "is_configured": true, 00:14:13.195 "data_offset": 2048, 00:14:13.195 "data_size": 63488 00:14:13.195 } 00:14:13.195 ] 00:14:13.195 } 00:14:13.195 } 00:14:13.195 }' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:13.195 BaseBdev2 00:14:13.195 BaseBdev3 00:14:13.195 BaseBdev4' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.195 07:11:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.195 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.196 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.454 [2024-11-20 07:11:10.600714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.454 [2024-11-20 07:11:10.600894] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.454 [2024-11-20 07:11:10.601072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.454 "name": "Existed_Raid", 00:14:13.454 "uuid": "bda13276-0f28-44ef-8f90-08b412a065c9", 00:14:13.454 "strip_size_kb": 64, 00:14:13.454 "state": "offline", 00:14:13.454 "raid_level": "raid0", 00:14:13.454 "superblock": true, 00:14:13.454 "num_base_bdevs": 4, 00:14:13.454 "num_base_bdevs_discovered": 3, 00:14:13.454 "num_base_bdevs_operational": 3, 00:14:13.454 "base_bdevs_list": [ 00:14:13.454 { 00:14:13.454 "name": null, 00:14:13.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.454 "is_configured": false, 00:14:13.454 "data_offset": 0, 00:14:13.454 "data_size": 63488 00:14:13.454 }, 00:14:13.454 { 00:14:13.454 "name": "BaseBdev2", 00:14:13.454 "uuid": "97167269-9fd0-4792-b814-798d900b175b", 00:14:13.454 "is_configured": true, 00:14:13.454 "data_offset": 2048, 00:14:13.454 "data_size": 63488 00:14:13.454 }, 00:14:13.454 { 00:14:13.454 "name": "BaseBdev3", 00:14:13.454 "uuid": "283a0dac-21b6-49fa-acff-da2435bd4105", 00:14:13.454 "is_configured": true, 00:14:13.454 "data_offset": 2048, 00:14:13.454 "data_size": 63488 00:14:13.454 }, 00:14:13.454 { 00:14:13.454 "name": "BaseBdev4", 00:14:13.454 "uuid": "85fa66e2-a3f1-4255-87ec-0e3d0a5cf314", 00:14:13.454 "is_configured": true, 00:14:13.454 "data_offset": 2048, 00:14:13.454 "data_size": 63488 00:14:13.454 } 00:14:13.454 ] 00:14:13.454 }' 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.454 07:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.072 
07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 [2024-11-20 07:11:11.261002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:14.072 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.331 [2024-11-20 07:11:11.404808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:14.331 07:11:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.331 [2024-11-20 07:11:11.553879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:14.331 [2024-11-20 07:11:11.554082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:14.331 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.590 BaseBdev2 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.590 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.590 [ 00:14:14.590 { 00:14:14.590 "name": "BaseBdev2", 00:14:14.590 "aliases": [ 00:14:14.590 
"5f807588-7f46-4625-88bf-b5cf82a1edde" 00:14:14.590 ], 00:14:14.590 "product_name": "Malloc disk", 00:14:14.590 "block_size": 512, 00:14:14.590 "num_blocks": 65536, 00:14:14.590 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:14.590 "assigned_rate_limits": { 00:14:14.590 "rw_ios_per_sec": 0, 00:14:14.590 "rw_mbytes_per_sec": 0, 00:14:14.590 "r_mbytes_per_sec": 0, 00:14:14.590 "w_mbytes_per_sec": 0 00:14:14.590 }, 00:14:14.590 "claimed": false, 00:14:14.590 "zoned": false, 00:14:14.590 "supported_io_types": { 00:14:14.590 "read": true, 00:14:14.590 "write": true, 00:14:14.590 "unmap": true, 00:14:14.590 "flush": true, 00:14:14.590 "reset": true, 00:14:14.590 "nvme_admin": false, 00:14:14.590 "nvme_io": false, 00:14:14.590 "nvme_io_md": false, 00:14:14.590 "write_zeroes": true, 00:14:14.590 "zcopy": true, 00:14:14.590 "get_zone_info": false, 00:14:14.590 "zone_management": false, 00:14:14.590 "zone_append": false, 00:14:14.590 "compare": false, 00:14:14.590 "compare_and_write": false, 00:14:14.590 "abort": true, 00:14:14.590 "seek_hole": false, 00:14:14.590 "seek_data": false, 00:14:14.590 "copy": true, 00:14:14.590 "nvme_iov_md": false 00:14:14.590 }, 00:14:14.590 "memory_domains": [ 00:14:14.590 { 00:14:14.590 "dma_device_id": "system", 00:14:14.590 "dma_device_type": 1 00:14:14.590 }, 00:14:14.590 { 00:14:14.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.590 "dma_device_type": 2 00:14:14.590 } 00:14:14.590 ], 00:14:14.590 "driver_specific": {} 00:14:14.591 } 00:14:14.591 ] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.591 07:11:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.591 BaseBdev3 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.591 [ 00:14:14.591 { 
00:14:14.591 "name": "BaseBdev3", 00:14:14.591 "aliases": [ 00:14:14.591 "2e33ae15-9707-4229-9287-37eb90b066ed" 00:14:14.591 ], 00:14:14.591 "product_name": "Malloc disk", 00:14:14.591 "block_size": 512, 00:14:14.591 "num_blocks": 65536, 00:14:14.591 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:14.591 "assigned_rate_limits": { 00:14:14.591 "rw_ios_per_sec": 0, 00:14:14.591 "rw_mbytes_per_sec": 0, 00:14:14.591 "r_mbytes_per_sec": 0, 00:14:14.591 "w_mbytes_per_sec": 0 00:14:14.591 }, 00:14:14.591 "claimed": false, 00:14:14.591 "zoned": false, 00:14:14.591 "supported_io_types": { 00:14:14.591 "read": true, 00:14:14.591 "write": true, 00:14:14.591 "unmap": true, 00:14:14.591 "flush": true, 00:14:14.591 "reset": true, 00:14:14.591 "nvme_admin": false, 00:14:14.591 "nvme_io": false, 00:14:14.591 "nvme_io_md": false, 00:14:14.591 "write_zeroes": true, 00:14:14.591 "zcopy": true, 00:14:14.591 "get_zone_info": false, 00:14:14.591 "zone_management": false, 00:14:14.591 "zone_append": false, 00:14:14.591 "compare": false, 00:14:14.591 "compare_and_write": false, 00:14:14.591 "abort": true, 00:14:14.591 "seek_hole": false, 00:14:14.591 "seek_data": false, 00:14:14.591 "copy": true, 00:14:14.591 "nvme_iov_md": false 00:14:14.591 }, 00:14:14.591 "memory_domains": [ 00:14:14.591 { 00:14:14.591 "dma_device_id": "system", 00:14:14.591 "dma_device_type": 1 00:14:14.591 }, 00:14:14.591 { 00:14:14.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.591 "dma_device_type": 2 00:14:14.591 } 00:14:14.591 ], 00:14:14.591 "driver_specific": {} 00:14:14.591 } 00:14:14.591 ] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.591 BaseBdev4 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.591 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:14.591 [ 00:14:14.591 { 00:14:14.591 "name": "BaseBdev4", 00:14:14.591 "aliases": [ 00:14:14.591 "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8" 00:14:14.591 ], 00:14:14.591 "product_name": "Malloc disk", 00:14:14.591 "block_size": 512, 00:14:14.591 "num_blocks": 65536, 00:14:14.591 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:14.591 "assigned_rate_limits": { 00:14:14.591 "rw_ios_per_sec": 0, 00:14:14.591 "rw_mbytes_per_sec": 0, 00:14:14.591 "r_mbytes_per_sec": 0, 00:14:14.591 "w_mbytes_per_sec": 0 00:14:14.591 }, 00:14:14.591 "claimed": false, 00:14:14.591 "zoned": false, 00:14:14.591 "supported_io_types": { 00:14:14.591 "read": true, 00:14:14.591 "write": true, 00:14:14.591 "unmap": true, 00:14:14.591 "flush": true, 00:14:14.591 "reset": true, 00:14:14.591 "nvme_admin": false, 00:14:14.591 "nvme_io": false, 00:14:14.591 "nvme_io_md": false, 00:14:14.591 "write_zeroes": true, 00:14:14.591 "zcopy": true, 00:14:14.591 "get_zone_info": false, 00:14:14.591 "zone_management": false, 00:14:14.850 "zone_append": false, 00:14:14.850 "compare": false, 00:14:14.850 "compare_and_write": false, 00:14:14.850 "abort": true, 00:14:14.850 "seek_hole": false, 00:14:14.850 "seek_data": false, 00:14:14.850 "copy": true, 00:14:14.850 "nvme_iov_md": false 00:14:14.850 }, 00:14:14.850 "memory_domains": [ 00:14:14.850 { 00:14:14.850 "dma_device_id": "system", 00:14:14.850 "dma_device_type": 1 00:14:14.850 }, 00:14:14.850 { 00:14:14.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.850 "dma_device_type": 2 00:14:14.850 } 00:14:14.850 ], 00:14:14.850 "driver_specific": {} 00:14:14.850 } 00:14:14.850 ] 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:14.850 07:11:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.850 [2024-11-20 07:11:11.921706] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:14.850 [2024-11-20 07:11:11.921904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:14.850 [2024-11-20 07:11:11.922051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:14.850 [2024-11-20 07:11:11.924551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.850 [2024-11-20 07:11:11.924625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.850 "name": "Existed_Raid", 00:14:14.850 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:14.850 "strip_size_kb": 64, 00:14:14.850 "state": "configuring", 00:14:14.850 "raid_level": "raid0", 00:14:14.850 "superblock": true, 00:14:14.850 "num_base_bdevs": 4, 00:14:14.850 "num_base_bdevs_discovered": 3, 00:14:14.850 "num_base_bdevs_operational": 4, 00:14:14.850 "base_bdevs_list": [ 00:14:14.850 { 00:14:14.850 "name": "BaseBdev1", 00:14:14.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.850 "is_configured": false, 00:14:14.850 "data_offset": 0, 00:14:14.850 "data_size": 0 00:14:14.850 }, 00:14:14.850 { 00:14:14.850 "name": "BaseBdev2", 00:14:14.850 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:14.850 "is_configured": true, 00:14:14.850 "data_offset": 2048, 00:14:14.850 "data_size": 63488 
00:14:14.850 }, 00:14:14.850 { 00:14:14.850 "name": "BaseBdev3", 00:14:14.850 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:14.850 "is_configured": true, 00:14:14.850 "data_offset": 2048, 00:14:14.850 "data_size": 63488 00:14:14.850 }, 00:14:14.850 { 00:14:14.850 "name": "BaseBdev4", 00:14:14.850 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:14.850 "is_configured": true, 00:14:14.850 "data_offset": 2048, 00:14:14.850 "data_size": 63488 00:14:14.850 } 00:14:14.850 ] 00:14:14.850 }' 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.850 07:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.417 [2024-11-20 07:11:12.473858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.417 "name": "Existed_Raid", 00:14:15.417 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:15.417 "strip_size_kb": 64, 00:14:15.417 "state": "configuring", 00:14:15.417 "raid_level": "raid0", 00:14:15.417 "superblock": true, 00:14:15.417 "num_base_bdevs": 4, 00:14:15.417 "num_base_bdevs_discovered": 2, 00:14:15.417 "num_base_bdevs_operational": 4, 00:14:15.417 "base_bdevs_list": [ 00:14:15.417 { 00:14:15.417 "name": "BaseBdev1", 00:14:15.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.417 "is_configured": false, 00:14:15.417 "data_offset": 0, 00:14:15.417 "data_size": 0 00:14:15.417 }, 00:14:15.417 { 00:14:15.417 "name": null, 00:14:15.417 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:15.417 "is_configured": false, 00:14:15.417 "data_offset": 0, 00:14:15.417 "data_size": 63488 
00:14:15.417 }, 00:14:15.417 { 00:14:15.417 "name": "BaseBdev3", 00:14:15.417 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:15.417 "is_configured": true, 00:14:15.417 "data_offset": 2048, 00:14:15.417 "data_size": 63488 00:14:15.417 }, 00:14:15.417 { 00:14:15.417 "name": "BaseBdev4", 00:14:15.417 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:15.417 "is_configured": true, 00:14:15.417 "data_offset": 2048, 00:14:15.417 "data_size": 63488 00:14:15.417 } 00:14:15.417 ] 00:14:15.417 }' 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.417 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.675 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.675 07:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.675 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.675 07:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.934 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.934 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 [2024-11-20 07:11:13.067966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:15.935 BaseBdev1 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 [ 00:14:15.935 { 00:14:15.935 "name": "BaseBdev1", 00:14:15.935 "aliases": [ 00:14:15.935 "943b28e4-c1e8-42be-8a6e-2cc3f7362d70" 00:14:15.935 ], 00:14:15.935 "product_name": "Malloc disk", 00:14:15.935 "block_size": 512, 00:14:15.935 "num_blocks": 65536, 00:14:15.935 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:15.935 "assigned_rate_limits": { 00:14:15.935 "rw_ios_per_sec": 0, 00:14:15.935 "rw_mbytes_per_sec": 0, 
00:14:15.935 "r_mbytes_per_sec": 0, 00:14:15.935 "w_mbytes_per_sec": 0 00:14:15.935 }, 00:14:15.935 "claimed": true, 00:14:15.935 "claim_type": "exclusive_write", 00:14:15.935 "zoned": false, 00:14:15.935 "supported_io_types": { 00:14:15.935 "read": true, 00:14:15.935 "write": true, 00:14:15.935 "unmap": true, 00:14:15.935 "flush": true, 00:14:15.935 "reset": true, 00:14:15.935 "nvme_admin": false, 00:14:15.935 "nvme_io": false, 00:14:15.935 "nvme_io_md": false, 00:14:15.935 "write_zeroes": true, 00:14:15.935 "zcopy": true, 00:14:15.935 "get_zone_info": false, 00:14:15.935 "zone_management": false, 00:14:15.935 "zone_append": false, 00:14:15.935 "compare": false, 00:14:15.935 "compare_and_write": false, 00:14:15.935 "abort": true, 00:14:15.935 "seek_hole": false, 00:14:15.935 "seek_data": false, 00:14:15.935 "copy": true, 00:14:15.935 "nvme_iov_md": false 00:14:15.935 }, 00:14:15.935 "memory_domains": [ 00:14:15.935 { 00:14:15.935 "dma_device_id": "system", 00:14:15.935 "dma_device_type": 1 00:14:15.935 }, 00:14:15.935 { 00:14:15.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.935 "dma_device_type": 2 00:14:15.935 } 00:14:15.935 ], 00:14:15.935 "driver_specific": {} 00:14:15.935 } 00:14:15.935 ] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:15.935 07:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.935 "name": "Existed_Raid", 00:14:15.935 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:15.935 "strip_size_kb": 64, 00:14:15.935 "state": "configuring", 00:14:15.935 "raid_level": "raid0", 00:14:15.935 "superblock": true, 00:14:15.935 "num_base_bdevs": 4, 00:14:15.935 "num_base_bdevs_discovered": 3, 00:14:15.935 "num_base_bdevs_operational": 4, 00:14:15.935 "base_bdevs_list": [ 00:14:15.935 { 00:14:15.935 "name": "BaseBdev1", 00:14:15.935 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:15.935 "is_configured": true, 00:14:15.935 "data_offset": 2048, 00:14:15.935 "data_size": 63488 00:14:15.935 }, 00:14:15.935 { 
00:14:15.935 "name": null, 00:14:15.935 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:15.935 "is_configured": false, 00:14:15.935 "data_offset": 0, 00:14:15.935 "data_size": 63488 00:14:15.935 }, 00:14:15.935 { 00:14:15.935 "name": "BaseBdev3", 00:14:15.935 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:15.935 "is_configured": true, 00:14:15.935 "data_offset": 2048, 00:14:15.935 "data_size": 63488 00:14:15.935 }, 00:14:15.935 { 00:14:15.935 "name": "BaseBdev4", 00:14:15.935 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:15.935 "is_configured": true, 00:14:15.935 "data_offset": 2048, 00:14:15.935 "data_size": 63488 00:14:15.935 } 00:14:15.935 ] 00:14:15.935 }' 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.935 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.503 [2024-11-20 07:11:13.684231] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.503 07:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.503 "name": "Existed_Raid", 00:14:16.503 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:16.503 "strip_size_kb": 64, 00:14:16.503 "state": "configuring", 00:14:16.503 "raid_level": "raid0", 00:14:16.503 "superblock": true, 00:14:16.503 "num_base_bdevs": 4, 00:14:16.503 "num_base_bdevs_discovered": 2, 00:14:16.503 "num_base_bdevs_operational": 4, 00:14:16.503 "base_bdevs_list": [ 00:14:16.503 { 00:14:16.503 "name": "BaseBdev1", 00:14:16.503 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:16.503 "is_configured": true, 00:14:16.503 "data_offset": 2048, 00:14:16.503 "data_size": 63488 00:14:16.503 }, 00:14:16.503 { 00:14:16.503 "name": null, 00:14:16.503 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:16.503 "is_configured": false, 00:14:16.503 "data_offset": 0, 00:14:16.503 "data_size": 63488 00:14:16.503 }, 00:14:16.503 { 00:14:16.503 "name": null, 00:14:16.503 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:16.503 "is_configured": false, 00:14:16.503 "data_offset": 0, 00:14:16.503 "data_size": 63488 00:14:16.503 }, 00:14:16.503 { 00:14:16.503 "name": "BaseBdev4", 00:14:16.503 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:16.503 "is_configured": true, 00:14:16.503 "data_offset": 2048, 00:14:16.503 "data_size": 63488 00:14:16.503 } 00:14:16.503 ] 00:14:16.503 }' 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.503 07:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.071 
07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 [2024-11-20 07:11:14.252366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.071 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.071 "name": "Existed_Raid", 00:14:17.071 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:17.071 "strip_size_kb": 64, 00:14:17.071 "state": "configuring", 00:14:17.071 "raid_level": "raid0", 00:14:17.071 "superblock": true, 00:14:17.071 "num_base_bdevs": 4, 00:14:17.071 "num_base_bdevs_discovered": 3, 00:14:17.071 "num_base_bdevs_operational": 4, 00:14:17.071 "base_bdevs_list": [ 00:14:17.071 { 00:14:17.071 "name": "BaseBdev1", 00:14:17.071 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:17.071 "is_configured": true, 00:14:17.071 "data_offset": 2048, 00:14:17.072 "data_size": 63488 00:14:17.072 }, 00:14:17.072 { 00:14:17.072 "name": null, 00:14:17.072 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:17.072 "is_configured": false, 00:14:17.072 "data_offset": 0, 00:14:17.072 "data_size": 63488 00:14:17.072 }, 00:14:17.072 { 00:14:17.072 "name": "BaseBdev3", 00:14:17.072 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:17.072 "is_configured": true, 00:14:17.072 "data_offset": 2048, 00:14:17.072 "data_size": 63488 00:14:17.072 }, 00:14:17.072 { 00:14:17.072 "name": "BaseBdev4", 00:14:17.072 "uuid": 
"e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:17.072 "is_configured": true, 00:14:17.072 "data_offset": 2048, 00:14:17.072 "data_size": 63488 00:14:17.072 } 00:14:17.072 ] 00:14:17.072 }' 00:14:17.072 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.072 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.759 [2024-11-20 07:11:14.828562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.759 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.760 "name": "Existed_Raid", 00:14:17.760 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:17.760 "strip_size_kb": 64, 00:14:17.760 "state": "configuring", 00:14:17.760 "raid_level": "raid0", 00:14:17.760 "superblock": true, 00:14:17.760 "num_base_bdevs": 4, 00:14:17.760 "num_base_bdevs_discovered": 2, 00:14:17.760 "num_base_bdevs_operational": 4, 00:14:17.760 "base_bdevs_list": [ 00:14:17.760 { 00:14:17.760 "name": null, 00:14:17.760 
"uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:17.760 "is_configured": false, 00:14:17.760 "data_offset": 0, 00:14:17.760 "data_size": 63488 00:14:17.760 }, 00:14:17.760 { 00:14:17.760 "name": null, 00:14:17.760 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:17.760 "is_configured": false, 00:14:17.760 "data_offset": 0, 00:14:17.760 "data_size": 63488 00:14:17.760 }, 00:14:17.760 { 00:14:17.760 "name": "BaseBdev3", 00:14:17.760 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:17.760 "is_configured": true, 00:14:17.760 "data_offset": 2048, 00:14:17.760 "data_size": 63488 00:14:17.760 }, 00:14:17.760 { 00:14:17.760 "name": "BaseBdev4", 00:14:17.760 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:17.760 "is_configured": true, 00:14:17.760 "data_offset": 2048, 00:14:17.760 "data_size": 63488 00:14:17.760 } 00:14:17.760 ] 00:14:17.760 }' 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.760 07:11:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.330 [2024-11-20 07:11:15.459432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.330 "name": "Existed_Raid", 00:14:18.330 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:18.330 "strip_size_kb": 64, 00:14:18.330 "state": "configuring", 00:14:18.330 "raid_level": "raid0", 00:14:18.330 "superblock": true, 00:14:18.330 "num_base_bdevs": 4, 00:14:18.330 "num_base_bdevs_discovered": 3, 00:14:18.330 "num_base_bdevs_operational": 4, 00:14:18.330 "base_bdevs_list": [ 00:14:18.330 { 00:14:18.330 "name": null, 00:14:18.330 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:18.330 "is_configured": false, 00:14:18.330 "data_offset": 0, 00:14:18.330 "data_size": 63488 00:14:18.330 }, 00:14:18.330 { 00:14:18.330 "name": "BaseBdev2", 00:14:18.330 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:18.330 "is_configured": true, 00:14:18.330 "data_offset": 2048, 00:14:18.330 "data_size": 63488 00:14:18.330 }, 00:14:18.330 { 00:14:18.330 "name": "BaseBdev3", 00:14:18.330 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:18.330 "is_configured": true, 00:14:18.330 "data_offset": 2048, 00:14:18.330 "data_size": 63488 00:14:18.330 }, 00:14:18.330 { 00:14:18.330 "name": "BaseBdev4", 00:14:18.330 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:18.330 "is_configured": true, 00:14:18.330 "data_offset": 2048, 00:14:18.330 "data_size": 63488 00:14:18.330 } 00:14:18.330 ] 00:14:18.330 }' 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.330 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.898 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.898 07:11:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.898 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.898 07:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:18.898 07:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 943b28e4-c1e8-42be-8a6e-2cc3f7362d70 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.898 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.898 [2024-11-20 07:11:16.109603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:18.898 NewBaseBdev 00:14:18.898 [2024-11-20 07:11:16.110230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:18.899 [2024-11-20 07:11:16.110255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:18.899 [2024-11-20 07:11:16.110574] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:18.899 [2024-11-20 07:11:16.110753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:18.899 [2024-11-20 07:11:16.110775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:18.899 [2024-11-20 07:11:16.110949] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.899 
07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.899 [ 00:14:18.899 { 00:14:18.899 "name": "NewBaseBdev", 00:14:18.899 "aliases": [ 00:14:18.899 "943b28e4-c1e8-42be-8a6e-2cc3f7362d70" 00:14:18.899 ], 00:14:18.899 "product_name": "Malloc disk", 00:14:18.899 "block_size": 512, 00:14:18.899 "num_blocks": 65536, 00:14:18.899 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:18.899 "assigned_rate_limits": { 00:14:18.899 "rw_ios_per_sec": 0, 00:14:18.899 "rw_mbytes_per_sec": 0, 00:14:18.899 "r_mbytes_per_sec": 0, 00:14:18.899 "w_mbytes_per_sec": 0 00:14:18.899 }, 00:14:18.899 "claimed": true, 00:14:18.899 "claim_type": "exclusive_write", 00:14:18.899 "zoned": false, 00:14:18.899 "supported_io_types": { 00:14:18.899 "read": true, 00:14:18.899 "write": true, 00:14:18.899 "unmap": true, 00:14:18.899 "flush": true, 00:14:18.899 "reset": true, 00:14:18.899 "nvme_admin": false, 00:14:18.899 "nvme_io": false, 00:14:18.899 "nvme_io_md": false, 00:14:18.899 "write_zeroes": true, 00:14:18.899 "zcopy": true, 00:14:18.899 "get_zone_info": false, 00:14:18.899 "zone_management": false, 00:14:18.899 "zone_append": false, 00:14:18.899 "compare": false, 00:14:18.899 "compare_and_write": false, 00:14:18.899 "abort": true, 00:14:18.899 "seek_hole": false, 00:14:18.899 "seek_data": false, 00:14:18.899 "copy": true, 00:14:18.899 "nvme_iov_md": false 00:14:18.899 }, 00:14:18.899 "memory_domains": [ 00:14:18.899 { 00:14:18.899 "dma_device_id": "system", 00:14:18.899 "dma_device_type": 1 00:14:18.899 }, 00:14:18.899 { 00:14:18.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.899 "dma_device_type": 2 00:14:18.899 } 00:14:18.899 ], 00:14:18.899 "driver_specific": {} 00:14:18.899 } 00:14:18.899 ] 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:18.899 07:11:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.899 "name": "Existed_Raid", 00:14:18.899 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:18.899 "strip_size_kb": 64, 00:14:18.899 
"state": "online", 00:14:18.899 "raid_level": "raid0", 00:14:18.899 "superblock": true, 00:14:18.899 "num_base_bdevs": 4, 00:14:18.899 "num_base_bdevs_discovered": 4, 00:14:18.899 "num_base_bdevs_operational": 4, 00:14:18.899 "base_bdevs_list": [ 00:14:18.899 { 00:14:18.899 "name": "NewBaseBdev", 00:14:18.899 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:18.899 "is_configured": true, 00:14:18.899 "data_offset": 2048, 00:14:18.899 "data_size": 63488 00:14:18.899 }, 00:14:18.899 { 00:14:18.899 "name": "BaseBdev2", 00:14:18.899 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:18.899 "is_configured": true, 00:14:18.899 "data_offset": 2048, 00:14:18.899 "data_size": 63488 00:14:18.899 }, 00:14:18.899 { 00:14:18.899 "name": "BaseBdev3", 00:14:18.899 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:18.899 "is_configured": true, 00:14:18.899 "data_offset": 2048, 00:14:18.899 "data_size": 63488 00:14:18.899 }, 00:14:18.899 { 00:14:18.899 "name": "BaseBdev4", 00:14:18.899 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:18.899 "is_configured": true, 00:14:18.899 "data_offset": 2048, 00:14:18.899 "data_size": 63488 00:14:18.899 } 00:14:18.899 ] 00:14:18.899 }' 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.899 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:19.466 
07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.466 [2024-11-20 07:11:16.646264] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.466 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:19.466 "name": "Existed_Raid", 00:14:19.466 "aliases": [ 00:14:19.466 "6ca6d02e-2729-4d07-8bd4-7df095b9b250" 00:14:19.466 ], 00:14:19.466 "product_name": "Raid Volume", 00:14:19.466 "block_size": 512, 00:14:19.466 "num_blocks": 253952, 00:14:19.466 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:19.466 "assigned_rate_limits": { 00:14:19.466 "rw_ios_per_sec": 0, 00:14:19.466 "rw_mbytes_per_sec": 0, 00:14:19.466 "r_mbytes_per_sec": 0, 00:14:19.466 "w_mbytes_per_sec": 0 00:14:19.466 }, 00:14:19.466 "claimed": false, 00:14:19.466 "zoned": false, 00:14:19.466 "supported_io_types": { 00:14:19.466 "read": true, 00:14:19.466 "write": true, 00:14:19.467 "unmap": true, 00:14:19.467 "flush": true, 00:14:19.467 "reset": true, 00:14:19.467 "nvme_admin": false, 00:14:19.467 "nvme_io": false, 00:14:19.467 "nvme_io_md": false, 00:14:19.467 "write_zeroes": true, 00:14:19.467 "zcopy": false, 00:14:19.467 "get_zone_info": false, 00:14:19.467 "zone_management": false, 00:14:19.467 "zone_append": false, 00:14:19.467 "compare": false, 00:14:19.467 "compare_and_write": false, 00:14:19.467 "abort": 
false, 00:14:19.467 "seek_hole": false, 00:14:19.467 "seek_data": false, 00:14:19.467 "copy": false, 00:14:19.467 "nvme_iov_md": false 00:14:19.467 }, 00:14:19.467 "memory_domains": [ 00:14:19.467 { 00:14:19.467 "dma_device_id": "system", 00:14:19.467 "dma_device_type": 1 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.467 "dma_device_type": 2 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "system", 00:14:19.467 "dma_device_type": 1 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.467 "dma_device_type": 2 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "system", 00:14:19.467 "dma_device_type": 1 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.467 "dma_device_type": 2 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "system", 00:14:19.467 "dma_device_type": 1 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.467 "dma_device_type": 2 00:14:19.467 } 00:14:19.467 ], 00:14:19.467 "driver_specific": { 00:14:19.467 "raid": { 00:14:19.467 "uuid": "6ca6d02e-2729-4d07-8bd4-7df095b9b250", 00:14:19.467 "strip_size_kb": 64, 00:14:19.467 "state": "online", 00:14:19.467 "raid_level": "raid0", 00:14:19.467 "superblock": true, 00:14:19.467 "num_base_bdevs": 4, 00:14:19.467 "num_base_bdevs_discovered": 4, 00:14:19.467 "num_base_bdevs_operational": 4, 00:14:19.467 "base_bdevs_list": [ 00:14:19.467 { 00:14:19.467 "name": "NewBaseBdev", 00:14:19.467 "uuid": "943b28e4-c1e8-42be-8a6e-2cc3f7362d70", 00:14:19.467 "is_configured": true, 00:14:19.467 "data_offset": 2048, 00:14:19.467 "data_size": 63488 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "name": "BaseBdev2", 00:14:19.467 "uuid": "5f807588-7f46-4625-88bf-b5cf82a1edde", 00:14:19.467 "is_configured": true, 00:14:19.467 "data_offset": 2048, 00:14:19.467 "data_size": 63488 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 
"name": "BaseBdev3", 00:14:19.467 "uuid": "2e33ae15-9707-4229-9287-37eb90b066ed", 00:14:19.467 "is_configured": true, 00:14:19.467 "data_offset": 2048, 00:14:19.467 "data_size": 63488 00:14:19.467 }, 00:14:19.467 { 00:14:19.467 "name": "BaseBdev4", 00:14:19.467 "uuid": "e7f0ed6c-fec8-4e81-8cd7-8cb134691fa8", 00:14:19.467 "is_configured": true, 00:14:19.467 "data_offset": 2048, 00:14:19.467 "data_size": 63488 00:14:19.467 } 00:14:19.467 ] 00:14:19.467 } 00:14:19.467 } 00:14:19.467 }' 00:14:19.467 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:19.467 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:19.467 BaseBdev2 00:14:19.467 BaseBdev3 00:14:19.467 BaseBdev4' 00:14:19.467 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.726 07:11:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.726 07:11:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.726 [2024-11-20 07:11:17.017945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.726 [2024-11-20 07:11:17.018102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:19.726 [2024-11-20 07:11:17.018224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.726 [2024-11-20 07:11:17.018317] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.726 [2024-11-20 07:11:17.018334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70073 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70073 ']' 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70073 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.726 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70073 00:14:19.984 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:19.984 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.984 killing process with pid 70073 00:14:19.984 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70073' 00:14:19.985 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70073 00:14:19.985 [2024-11-20 07:11:17.057182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.985 07:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70073 00:14:20.244 [2024-11-20 07:11:17.411467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.180 07:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:21.180 00:14:21.180 real 0m12.746s 00:14:21.180 user 0m21.274s 00:14:21.180 sys 0m1.646s 00:14:21.180 07:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.180 
************************************ 00:14:21.180 END TEST raid_state_function_test_sb 00:14:21.180 ************************************ 00:14:21.180 07:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.180 07:11:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:14:21.180 07:11:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:21.180 07:11:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.180 07:11:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.180 ************************************ 00:14:21.180 START TEST raid_superblock_test 00:14:21.180 ************************************ 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:21.180 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:21.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70755 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70755 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70755 ']' 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.181 07:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.447 [2024-11-20 07:11:18.573742] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
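The trace at bdev_raid.sh@404-406 shows the strip-size argument being derived from the RAID level: any level other than raid1 gets a 64 KiB strip size and the `-z 64` creation flag. A minimal standalone bash sketch of that branch (variable names taken from the trace; this runs outside SPDK):

```shell
#!/usr/bin/env bash
# Mirror of the strip-size selection traced at bdev_raid.sh@404-406:
# raid1 has no strip size, every other level gets "-z 64".
raid_level=raid0
strip_size=""
strip_size_create_arg=""
if [ "$raid_level" != "raid1" ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi
echo "$strip_size_create_arg"
```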
00:14:21.447 [2024-11-20 07:11:18.573922] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:14:21.447 [2024-11-20 07:11:18.748516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.737 [2024-11-20 07:11:18.898204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.998 [2024-11-20 07:11:19.113644] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.998 [2024-11-20 07:11:19.113730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:22.257 
07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.257 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 malloc1 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 [2024-11-20 07:11:19.605449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:22.516 [2024-11-20 07:11:19.605687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.516 [2024-11-20 07:11:19.605768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:22.516 [2024-11-20 07:11:19.605966] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.516 [2024-11-20 07:11:19.608833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.516 [2024-11-20 07:11:19.609022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:22.516 pt1 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 malloc2 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 [2024-11-20 07:11:19.657281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:22.516 [2024-11-20 07:11:19.657501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.516 [2024-11-20 07:11:19.657578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:22.516 [2024-11-20 07:11:19.657694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.516 [2024-11-20 07:11:19.660567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.516 [2024-11-20 07:11:19.660745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:22.516 
pt2 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.516 malloc3 00:14:22.516 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.517 [2024-11-20 07:11:19.725806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:22.517 [2024-11-20 07:11:19.725896] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.517 [2024-11-20 07:11:19.725933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:22.517 [2024-11-20 07:11:19.725957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.517 [2024-11-20 07:11:19.728738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.517 [2024-11-20 07:11:19.728800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:22.517 pt3 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.517 malloc4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.517 [2024-11-20 07:11:19.781489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:22.517 [2024-11-20 07:11:19.781558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.517 [2024-11-20 07:11:19.781589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:22.517 [2024-11-20 07:11:19.781603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.517 [2024-11-20 07:11:19.784400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.517 [2024-11-20 07:11:19.784446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:22.517 pt4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.517 [2024-11-20 07:11:19.789528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:22.517 [2024-11-20 
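The loop traced at bdev_raid.sh@416-425 above builds three parallel arrays — malloc names, passthru names, and fixed UUIDs — one entry per base bdev, then creates each malloc/passthru pair over RPC. A self-contained bash sketch of just the array bookkeeping (the `rpc_cmd` calls are shown as comments, since they need a running SPDK app):

```shell
#!/usr/bin/env bash
# Array bookkeeping from the bdev_raid.sh@416-425 loop: for each of the
# four base bdevs, record a malloc name, a passthru name, and a UUID.
num_base_bdevs=4
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    bdev_malloc="malloc$i"
    bdev_pt="pt$i"
    bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
    base_bdevs_malloc+=("$bdev_malloc")
    base_bdevs_pt+=("$bdev_pt")
    base_bdevs_pt_uuid+=("$bdev_pt_uuid")
    # In the real test each iteration then runs:
    #   rpc_cmd bdev_malloc_create 32 512 -b "$bdev_malloc"
    #   rpc_cmd bdev_passthru_create -b "$bdev_malloc" -p "$bdev_pt" -u "$bdev_pt_uuid"
done
echo "${base_bdevs_pt[*]}"
```

prints `pt1 pt2 pt3 pt4`, the base-bdev set the raid volume is assembled from just below.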
07:11:19.792068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:22.517 [2024-11-20 07:11:19.792296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:22.517 [2024-11-20 07:11:19.792509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:22.517 [2024-11-20 07:11:19.792882] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:22.517 [2024-11-20 07:11:19.793008] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:22.517 [2024-11-20 07:11:19.793405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:22.517 [2024-11-20 07:11:19.793744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:22.517 [2024-11-20 07:11:19.793892] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:22.517 [2024-11-20 07:11:19.794321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
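The configure trace reports `blockcnt 253952, blocklen 512` for the raid0 volume, which is the four base bdevs' data regions concatenated: each malloc is 32 MiB of 512-byte blocks (65536 blocks, from `bdev_malloc_create 32 512`), the on-disk superblock reserves the 2048-block `data_offset` reported in the bdev dump, and raid0 sums the remaining `data_size` of each member. The arithmetic, checked in shell:

```shell
#!/usr/bin/env bash
# Size check for the raid0 volume reported in the trace: each base bdev
# is a 32 MiB malloc with 512-byte blocks, the raid superblock claims
# 2048 blocks of it, and raid0 concatenates the remainders.
blocklen=512
malloc_blocks=$(( 32 * 1024 * 1024 / blocklen ))   # 65536
data_offset=2048                                   # superblock reservation
data_size=$(( malloc_blocks - data_offset ))       # 63488, as in the dump
num_base_bdevs=4
raid_blocks=$(( num_base_bdevs * data_size ))
echo "$data_size $raid_blocks"
```

prints `63488 253952`, matching the `data_size` and `num_blocks` fields in the JSON below.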
raid_bdev_info 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.517 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.776 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.776 "name": "raid_bdev1", 00:14:22.776 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:22.776 "strip_size_kb": 64, 00:14:22.776 "state": "online", 00:14:22.776 "raid_level": "raid0", 00:14:22.776 "superblock": true, 00:14:22.776 "num_base_bdevs": 4, 00:14:22.776 "num_base_bdevs_discovered": 4, 00:14:22.776 "num_base_bdevs_operational": 4, 00:14:22.776 "base_bdevs_list": [ 00:14:22.776 { 00:14:22.776 "name": "pt1", 00:14:22.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:22.776 "is_configured": true, 00:14:22.776 "data_offset": 2048, 00:14:22.776 "data_size": 63488 00:14:22.776 }, 00:14:22.776 { 00:14:22.776 "name": "pt2", 00:14:22.776 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:22.776 "is_configured": true, 00:14:22.776 "data_offset": 2048, 00:14:22.776 "data_size": 63488 00:14:22.776 }, 00:14:22.776 { 00:14:22.776 "name": "pt3", 00:14:22.776 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:22.776 "is_configured": true, 00:14:22.776 "data_offset": 2048, 00:14:22.776 
"data_size": 63488 00:14:22.776 }, 00:14:22.776 { 00:14:22.776 "name": "pt4", 00:14:22.776 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:22.776 "is_configured": true, 00:14:22.776 "data_offset": 2048, 00:14:22.776 "data_size": 63488 00:14:22.776 } 00:14:22.776 ] 00:14:22.776 }' 00:14:22.776 07:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.776 07:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:23.035 [2024-11-20 07:11:20.286844] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.035 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:23.035 "name": "raid_bdev1", 00:14:23.035 "aliases": [ 00:14:23.035 "77a4de9e-8bef-456d-921b-401df4050f69" 
00:14:23.035 ], 00:14:23.035 "product_name": "Raid Volume", 00:14:23.035 "block_size": 512, 00:14:23.035 "num_blocks": 253952, 00:14:23.035 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:23.035 "assigned_rate_limits": { 00:14:23.035 "rw_ios_per_sec": 0, 00:14:23.035 "rw_mbytes_per_sec": 0, 00:14:23.035 "r_mbytes_per_sec": 0, 00:14:23.035 "w_mbytes_per_sec": 0 00:14:23.035 }, 00:14:23.035 "claimed": false, 00:14:23.035 "zoned": false, 00:14:23.035 "supported_io_types": { 00:14:23.035 "read": true, 00:14:23.035 "write": true, 00:14:23.035 "unmap": true, 00:14:23.035 "flush": true, 00:14:23.035 "reset": true, 00:14:23.035 "nvme_admin": false, 00:14:23.035 "nvme_io": false, 00:14:23.035 "nvme_io_md": false, 00:14:23.035 "write_zeroes": true, 00:14:23.035 "zcopy": false, 00:14:23.035 "get_zone_info": false, 00:14:23.035 "zone_management": false, 00:14:23.035 "zone_append": false, 00:14:23.035 "compare": false, 00:14:23.035 "compare_and_write": false, 00:14:23.035 "abort": false, 00:14:23.035 "seek_hole": false, 00:14:23.035 "seek_data": false, 00:14:23.035 "copy": false, 00:14:23.035 "nvme_iov_md": false 00:14:23.035 }, 00:14:23.035 "memory_domains": [ 00:14:23.035 { 00:14:23.035 "dma_device_id": "system", 00:14:23.035 "dma_device_type": 1 00:14:23.035 }, 00:14:23.035 { 00:14:23.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.035 "dma_device_type": 2 00:14:23.035 }, 00:14:23.035 { 00:14:23.035 "dma_device_id": "system", 00:14:23.035 "dma_device_type": 1 00:14:23.035 }, 00:14:23.035 { 00:14:23.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.035 "dma_device_type": 2 00:14:23.035 }, 00:14:23.035 { 00:14:23.035 "dma_device_id": "system", 00:14:23.035 "dma_device_type": 1 00:14:23.035 }, 00:14:23.035 { 00:14:23.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.035 "dma_device_type": 2 00:14:23.035 }, 00:14:23.035 { 00:14:23.036 "dma_device_id": "system", 00:14:23.036 "dma_device_type": 1 00:14:23.036 }, 00:14:23.036 { 00:14:23.036 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:23.036 "dma_device_type": 2 00:14:23.036 } 00:14:23.036 ], 00:14:23.036 "driver_specific": { 00:14:23.036 "raid": { 00:14:23.036 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:23.036 "strip_size_kb": 64, 00:14:23.036 "state": "online", 00:14:23.036 "raid_level": "raid0", 00:14:23.036 "superblock": true, 00:14:23.036 "num_base_bdevs": 4, 00:14:23.036 "num_base_bdevs_discovered": 4, 00:14:23.036 "num_base_bdevs_operational": 4, 00:14:23.036 "base_bdevs_list": [ 00:14:23.036 { 00:14:23.036 "name": "pt1", 00:14:23.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.036 "is_configured": true, 00:14:23.036 "data_offset": 2048, 00:14:23.036 "data_size": 63488 00:14:23.036 }, 00:14:23.036 { 00:14:23.036 "name": "pt2", 00:14:23.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.036 "is_configured": true, 00:14:23.036 "data_offset": 2048, 00:14:23.036 "data_size": 63488 00:14:23.036 }, 00:14:23.036 { 00:14:23.036 "name": "pt3", 00:14:23.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:23.036 "is_configured": true, 00:14:23.036 "data_offset": 2048, 00:14:23.036 "data_size": 63488 00:14:23.036 }, 00:14:23.036 { 00:14:23.036 "name": "pt4", 00:14:23.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:23.036 "is_configured": true, 00:14:23.036 "data_offset": 2048, 00:14:23.036 "data_size": 63488 00:14:23.036 } 00:14:23.036 ] 00:14:23.036 } 00:14:23.036 } 00:14:23.036 }' 00:14:23.036 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:23.294 pt2 00:14:23.294 pt3 00:14:23.294 pt4' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- 
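The two jq filters at bdev_raid.sh@188-189 do the heavy lifting of this verification step: one collects the configured base-bdev names, the other flattens `block_size`, `md_size`, `md_interleave`, and `dif_type` into a single comparison string. A sketch run against a trimmed copy of the dump above (requires jq; the shortened JSON is illustrative, not the full RPC output):

```shell
#!/usr/bin/env bash
# The two jq filters from bdev_raid.sh@188-189, applied to a trimmed
# stand-in for the bdev_get_bdevs output (requires jq).
json='[{"block_size":512,"driver_specific":{"raid":{"base_bdevs_list":[
  {"name":"pt1","is_configured":true},{"name":"pt2","is_configured":true}]}}}]'
names=$(echo "$json" | jq -r \
  '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
# join(" ") treats null md_size/md_interleave/dif_type as empty strings,
# so a plain 512-byte bdev yields "512   " -- note three trailing spaces.
cmp=$(echo "$json" | jq -r \
  '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")')
echo "$names"
echo "[$cmp]"
```

The trailing spaces in `cmp` are exactly what the later `[[ 512 == \5\1\2\ \ \ ]]` checks are matching against.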
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.294 07:11:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:23.294 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.295 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.295 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
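The repeated `[[ 512 == \5\1\2\ \ \ ]]` lines look odd in the xtrace but are a plain string comparison: inside `[[ ]]` the right-hand side is a glob pattern, so every character of `$cmp_raid_bdev` — including its three trailing spaces — gets backslash-escaped to force a literal match. A sketch of the same check with explicit values:

```shell
#!/usr/bin/env bash
# Why the xtrace shows '512 == \5\1\2\ \ \ ': inside [[ ]] the RHS is a
# glob pattern, so spaces in the comparison value must be escaped to
# match literally. Both strings here are "512" plus three trailing
# spaces (empty md_size/md_interleave/dif_type fields joined by spaces).
cmp_raid_bdev='512   '
cmp_base_bdev='512   '
if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
    result=match
else
    result=mismatch
fi
echo "$result"
```

Quoting the right-hand side, as above, is the portable way to get the same literal match that the test script achieves by escaping each space.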
.uuid' 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 [2024-11-20 07:11:20.646909] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=77a4de9e-8bef-456d-921b-401df4050f69 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 77a4de9e-8bef-456d-921b-401df4050f69 ']' 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 [2024-11-20 07:11:20.694577] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.553 [2024-11-20 07:11:20.694801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:23.553 [2024-11-20 07:11:20.694996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:23.553 [2024-11-20 07:11:20.695136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:23.553 [2024-11-20 07:11:20.695169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.553 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.554 [2024-11-20 07:11:20.854665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:23.554 [2024-11-20 07:11:20.858000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:23.554 [2024-11-20 07:11:20.858089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:23.554 [2024-11-20 07:11:20.858163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:14:23.554 [2024-11-20 07:11:20.858264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:23.554 [2024-11-20 07:11:20.858358] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:23.554 [2024-11-20 07:11:20.858407] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:23.554 [2024-11-20 07:11:20.858456] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:14:23.554 [2024-11-20 07:11:20.858488] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:23.554 [2024-11-20 07:11:20.858519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:14:23.554 request: 00:14:23.554 { 00:14:23.554 "name": "raid_bdev1", 00:14:23.554 "raid_level": "raid0", 00:14:23.554 "base_bdevs": [ 00:14:23.554 "malloc1", 00:14:23.554 "malloc2", 00:14:23.554 "malloc3", 00:14:23.554 "malloc4" 00:14:23.554 ], 00:14:23.554 "strip_size_kb": 64, 00:14:23.554 "superblock": false, 00:14:23.554 "method": "bdev_raid_create", 00:14:23.554 "req_id": 1 00:14:23.554 } 00:14:23.554 Got JSON-RPC error response 00:14:23.554 response: 00:14:23.554 { 00:14:23.554 "code": -17, 00:14:23.554 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:23.554 } 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.554 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.812 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.812 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:23.812 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:23.812 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:23.812 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.812 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 [2024-11-20 07:11:20.922811] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:23.813 [2024-11-20 07:11:20.923063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.813 [2024-11-20 07:11:20.923134] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:23.813 [2024-11-20 07:11:20.923347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.813 [2024-11-20 07:11:20.926251] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.813 [2024-11-20 07:11:20.926418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:23.813 [2024-11-20 07:11:20.926624] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:23.813 [2024-11-20 07:11:20.926819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:23.813 pt1 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.813 "name": "raid_bdev1", 00:14:23.813 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:23.813 "strip_size_kb": 64, 00:14:23.813 "state": "configuring", 00:14:23.813 "raid_level": "raid0", 00:14:23.813 "superblock": true, 00:14:23.813 "num_base_bdevs": 4, 00:14:23.813 "num_base_bdevs_discovered": 1, 00:14:23.813 "num_base_bdevs_operational": 4, 00:14:23.813 "base_bdevs_list": [ 00:14:23.813 { 00:14:23.813 "name": "pt1", 00:14:23.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:23.813 "is_configured": true, 00:14:23.813 "data_offset": 2048, 00:14:23.813 "data_size": 63488 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 2048, 00:14:23.813 "data_size": 63488 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 2048, 00:14:23.813 "data_size": 63488 00:14:23.813 }, 00:14:23.813 { 00:14:23.813 "name": null, 00:14:23.813 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:23.813 "is_configured": false, 00:14:23.813 "data_offset": 2048, 00:14:23.813 "data_size": 63488 00:14:23.813 } 00:14:23.813 ] 00:14:23.813 }' 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.813 07:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.381 [2024-11-20 07:11:21.435390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.381 [2024-11-20 07:11:21.435679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.381 [2024-11-20 07:11:21.435755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:24.381 [2024-11-20 07:11:21.435787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.381 [2024-11-20 07:11:21.436522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.381 [2024-11-20 07:11:21.436577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.381 [2024-11-20 07:11:21.436720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:24.381 [2024-11-20 07:11:21.436774] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.381 pt2 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.381 [2024-11-20 07:11:21.443381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.381 07:11:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.381 "name": "raid_bdev1", 00:14:24.381 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:24.381 "strip_size_kb": 64, 00:14:24.381 "state": "configuring", 00:14:24.381 "raid_level": "raid0", 00:14:24.381 "superblock": true, 00:14:24.381 "num_base_bdevs": 4, 00:14:24.381 "num_base_bdevs_discovered": 1, 00:14:24.381 "num_base_bdevs_operational": 4, 00:14:24.381 "base_bdevs_list": [ 00:14:24.381 { 00:14:24.381 "name": "pt1", 00:14:24.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.381 "is_configured": true, 00:14:24.381 "data_offset": 2048, 00:14:24.381 "data_size": 63488 00:14:24.381 }, 00:14:24.381 { 00:14:24.381 "name": null, 00:14:24.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.381 "is_configured": false, 00:14:24.381 "data_offset": 0, 00:14:24.381 "data_size": 63488 00:14:24.381 }, 00:14:24.381 { 00:14:24.381 "name": null, 00:14:24.381 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.381 "is_configured": false, 00:14:24.381 "data_offset": 2048, 00:14:24.381 "data_size": 63488 00:14:24.381 }, 00:14:24.381 { 00:14:24.381 "name": null, 00:14:24.381 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:24.381 "is_configured": false, 00:14:24.381 "data_offset": 2048, 00:14:24.381 "data_size": 63488 00:14:24.381 } 00:14:24.381 ] 00:14:24.381 }' 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.381 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.640 [2024-11-20 07:11:21.947532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:24.640 [2024-11-20 07:11:21.947798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.640 [2024-11-20 07:11:21.948012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:24.640 [2024-11-20 07:11:21.948050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.640 [2024-11-20 07:11:21.948798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.640 [2024-11-20 07:11:21.948846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:24.640 [2024-11-20 07:11:21.949175] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:24.640 [2024-11-20 07:11:21.949355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:24.640 pt2 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.640 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.640 [2024-11-20 07:11:21.955476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:24.640 [2024-11-20 07:11:21.955698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.640 [2024-11-20 07:11:21.955817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:24.640 [2024-11-20 07:11:21.956090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.640 [2024-11-20 07:11:21.956691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.640 [2024-11-20 07:11:21.956739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:24.640 [2024-11-20 07:11:21.956859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:24.640 [2024-11-20 07:11:21.957107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:24.899 pt3 00:14:24.899 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.899 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:24.899 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.899 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:24.899 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.899 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.899 [2024-11-20 07:11:21.963449] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:14:24.899 [2024-11-20 07:11:21.963703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.899 [2024-11-20 07:11:21.963767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:24.899 [2024-11-20 07:11:21.963790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.899 [2024-11-20 07:11:21.964406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.899 [2024-11-20 07:11:21.964448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:24.899 [2024-11-20 07:11:21.964559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:24.899 [2024-11-20 07:11:21.964596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:24.899 [2024-11-20 07:11:21.964828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:24.899 [2024-11-20 07:11:21.964859] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:24.899 [2024-11-20 07:11:21.965284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:24.899 [2024-11-20 07:11:21.965549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:24.899 [2024-11-20 07:11:21.965579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:24.900 [2024-11-20 07:11:21.965833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.900 pt4 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:24.900 
07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.900 07:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.900 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.900 "name": "raid_bdev1", 00:14:24.900 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:24.900 "strip_size_kb": 64, 00:14:24.900 "state": "online", 00:14:24.900 "raid_level": "raid0", 00:14:24.900 "superblock": true, 00:14:24.900 
"num_base_bdevs": 4, 00:14:24.900 "num_base_bdevs_discovered": 4, 00:14:24.900 "num_base_bdevs_operational": 4, 00:14:24.900 "base_bdevs_list": [ 00:14:24.900 { 00:14:24.900 "name": "pt1", 00:14:24.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:24.900 "is_configured": true, 00:14:24.900 "data_offset": 2048, 00:14:24.900 "data_size": 63488 00:14:24.900 }, 00:14:24.900 { 00:14:24.900 "name": "pt2", 00:14:24.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:24.900 "is_configured": true, 00:14:24.900 "data_offset": 2048, 00:14:24.900 "data_size": 63488 00:14:24.900 }, 00:14:24.900 { 00:14:24.900 "name": "pt3", 00:14:24.900 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:24.900 "is_configured": true, 00:14:24.900 "data_offset": 2048, 00:14:24.900 "data_size": 63488 00:14:24.900 }, 00:14:24.900 { 00:14:24.900 "name": "pt4", 00:14:24.900 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:24.900 "is_configured": true, 00:14:24.900 "data_offset": 2048, 00:14:24.900 "data_size": 63488 00:14:24.900 } 00:14:24.900 ] 00:14:24.900 }' 00:14:24.900 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.900 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:25.162 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.162 [2024-11-20 07:11:22.472072] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:25.422 "name": "raid_bdev1", 00:14:25.422 "aliases": [ 00:14:25.422 "77a4de9e-8bef-456d-921b-401df4050f69" 00:14:25.422 ], 00:14:25.422 "product_name": "Raid Volume", 00:14:25.422 "block_size": 512, 00:14:25.422 "num_blocks": 253952, 00:14:25.422 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:25.422 "assigned_rate_limits": { 00:14:25.422 "rw_ios_per_sec": 0, 00:14:25.422 "rw_mbytes_per_sec": 0, 00:14:25.422 "r_mbytes_per_sec": 0, 00:14:25.422 "w_mbytes_per_sec": 0 00:14:25.422 }, 00:14:25.422 "claimed": false, 00:14:25.422 "zoned": false, 00:14:25.422 "supported_io_types": { 00:14:25.422 "read": true, 00:14:25.422 "write": true, 00:14:25.422 "unmap": true, 00:14:25.422 "flush": true, 00:14:25.422 "reset": true, 00:14:25.422 "nvme_admin": false, 00:14:25.422 "nvme_io": false, 00:14:25.422 "nvme_io_md": false, 00:14:25.422 "write_zeroes": true, 00:14:25.422 "zcopy": false, 00:14:25.422 "get_zone_info": false, 00:14:25.422 "zone_management": false, 00:14:25.422 "zone_append": false, 00:14:25.422 "compare": false, 00:14:25.422 "compare_and_write": false, 00:14:25.422 "abort": false, 00:14:25.422 "seek_hole": false, 00:14:25.422 "seek_data": false, 00:14:25.422 "copy": false, 00:14:25.422 "nvme_iov_md": false 00:14:25.422 }, 00:14:25.422 "memory_domains": [ 00:14:25.422 { 00:14:25.422 "dma_device_id": "system", 
00:14:25.422 "dma_device_type": 1 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.422 "dma_device_type": 2 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "system", 00:14:25.422 "dma_device_type": 1 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.422 "dma_device_type": 2 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "system", 00:14:25.422 "dma_device_type": 1 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.422 "dma_device_type": 2 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "system", 00:14:25.422 "dma_device_type": 1 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.422 "dma_device_type": 2 00:14:25.422 } 00:14:25.422 ], 00:14:25.422 "driver_specific": { 00:14:25.422 "raid": { 00:14:25.422 "uuid": "77a4de9e-8bef-456d-921b-401df4050f69", 00:14:25.422 "strip_size_kb": 64, 00:14:25.422 "state": "online", 00:14:25.422 "raid_level": "raid0", 00:14:25.422 "superblock": true, 00:14:25.422 "num_base_bdevs": 4, 00:14:25.422 "num_base_bdevs_discovered": 4, 00:14:25.422 "num_base_bdevs_operational": 4, 00:14:25.422 "base_bdevs_list": [ 00:14:25.422 { 00:14:25.422 "name": "pt1", 00:14:25.422 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:25.422 "is_configured": true, 00:14:25.422 "data_offset": 2048, 00:14:25.422 "data_size": 63488 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "name": "pt2", 00:14:25.422 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:25.422 "is_configured": true, 00:14:25.422 "data_offset": 2048, 00:14:25.422 "data_size": 63488 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "name": "pt3", 00:14:25.422 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:25.422 "is_configured": true, 00:14:25.422 "data_offset": 2048, 00:14:25.422 "data_size": 63488 00:14:25.422 }, 00:14:25.422 { 00:14:25.422 "name": "pt4", 00:14:25.422 
"uuid": "00000000-0000-0000-0000-000000000004", 00:14:25.422 "is_configured": true, 00:14:25.422 "data_offset": 2048, 00:14:25.422 "data_size": 63488 00:14:25.422 } 00:14:25.422 ] 00:14:25.422 } 00:14:25.422 } 00:14:25.422 }' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:25.422 pt2 00:14:25.422 pt3 00:14:25.422 pt4' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.422 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:25.681 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.682 [2024-11-20 07:11:22.852105] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 77a4de9e-8bef-456d-921b-401df4050f69 '!=' 77a4de9e-8bef-456d-921b-401df4050f69 ']' 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70755 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70755 ']' 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70755 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:25.682 07:11:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70755 00:14:25.682 killing process with pid 70755 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70755' 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70755 00:14:25.682 [2024-11-20 07:11:22.932664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.682 07:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70755 00:14:25.682 [2024-11-20 07:11:22.932774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.682 [2024-11-20 07:11:22.932888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.682 [2024-11-20 07:11:22.932906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:26.257 [2024-11-20 07:11:23.284880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.193 07:11:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:27.193 00:14:27.193 real 0m5.826s 00:14:27.193 user 0m8.775s 00:14:27.193 sys 0m0.828s 00:14:27.193 07:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.193 07:11:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.193 ************************************ 00:14:27.193 END TEST raid_superblock_test 00:14:27.193 ************************************ 00:14:27.193 
07:11:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:14:27.193 07:11:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:27.193 07:11:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.193 07:11:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.193 ************************************ 00:14:27.193 START TEST raid_read_error_test 00:14:27.193 ************************************ 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.k0QVisSKuO 00:14:27.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71019 00:14:27.193 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71019 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71019 ']' 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:27.194 07:11:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.194 [2024-11-20 07:11:24.463289] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:14:27.194 [2024-11-20 07:11:24.463441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71019 ] 00:14:27.452 [2024-11-20 07:11:24.642510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.709 [2024-11-20 07:11:24.825497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.967 [2024-11-20 07:11:25.030304] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.967 [2024-11-20 07:11:25.030380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.226 BaseBdev1_malloc 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.226 true 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.226 [2024-11-20 07:11:25.538207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:28.226 [2024-11-20 07:11:25.538275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.226 [2024-11-20 07:11:25.538304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:28.226 [2024-11-20 07:11:25.538321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.226 [2024-11-20 07:11:25.541186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.226 [2024-11-20 07:11:25.541237] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.226 BaseBdev1 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.226 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 BaseBdev2_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 true 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 [2024-11-20 07:11:25.598250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:28.485 [2024-11-20 07:11:25.598338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.485 [2024-11-20 07:11:25.598374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:28.485 [2024-11-20 07:11:25.598400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.485 [2024-11-20 07:11:25.601930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.485 [2024-11-20 07:11:25.601992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:28.485 BaseBdev2 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 BaseBdev3_malloc 00:14:28.485 07:11:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 true 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 [2024-11-20 07:11:25.678406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:28.485 [2024-11-20 07:11:25.678473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.485 [2024-11-20 07:11:25.678499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:28.485 [2024-11-20 07:11:25.678516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.485 [2024-11-20 07:11:25.681346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.485 [2024-11-20 07:11:25.681518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:28.485 BaseBdev3 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 BaseBdev4_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 true 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 [2024-11-20 07:11:25.739778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:28.485 [2024-11-20 07:11:25.739846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.485 [2024-11-20 07:11:25.739886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:28.485 [2024-11-20 07:11:25.739913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.485 [2024-11-20 07:11:25.742631] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.485 [2024-11-20 07:11:25.742688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:28.485 BaseBdev4 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.485 [2024-11-20 07:11:25.747855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:28.485 [2024-11-20 07:11:25.750327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.485 [2024-11-20 07:11:25.750443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:28.485 [2024-11-20 07:11:25.750548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:28.485 [2024-11-20 07:11:25.750859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:28.485 [2024-11-20 07:11:25.750910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:28.485 [2024-11-20 07:11:25.751240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:28.485 [2024-11-20 07:11:25.751476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:28.485 [2024-11-20 07:11:25.751503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:28.485 [2024-11-20 07:11:25.751763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:28.485 07:11:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:28.485 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.486 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.744 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.744 "name": "raid_bdev1", 00:14:28.744 "uuid": "cc700633-25ca-492b-a640-bac85f80ecdc", 00:14:28.744 "strip_size_kb": 64, 00:14:28.744 "state": "online", 00:14:28.744 "raid_level": "raid0", 00:14:28.744 "superblock": true, 00:14:28.744 "num_base_bdevs": 4, 00:14:28.744 "num_base_bdevs_discovered": 4, 00:14:28.744 "num_base_bdevs_operational": 4, 00:14:28.744 "base_bdevs_list": [ 00:14:28.744 
{ 00:14:28.744 "name": "BaseBdev1", 00:14:28.744 "uuid": "fefe1dc2-a1e6-5f59-8c24-c5806e603443", 00:14:28.744 "is_configured": true, 00:14:28.744 "data_offset": 2048, 00:14:28.744 "data_size": 63488 00:14:28.744 }, 00:14:28.744 { 00:14:28.744 "name": "BaseBdev2", 00:14:28.744 "uuid": "021da2a3-8e84-5976-bae3-0fee5bbcc830", 00:14:28.744 "is_configured": true, 00:14:28.744 "data_offset": 2048, 00:14:28.744 "data_size": 63488 00:14:28.744 }, 00:14:28.744 { 00:14:28.744 "name": "BaseBdev3", 00:14:28.744 "uuid": "323d8b11-cc40-5f87-9afe-a20e5a9bd093", 00:14:28.744 "is_configured": true, 00:14:28.744 "data_offset": 2048, 00:14:28.744 "data_size": 63488 00:14:28.744 }, 00:14:28.744 { 00:14:28.744 "name": "BaseBdev4", 00:14:28.744 "uuid": "8d6bc15f-aac5-5f7e-acf7-a1b0635caba4", 00:14:28.744 "is_configured": true, 00:14:28.744 "data_offset": 2048, 00:14:28.744 "data_size": 63488 00:14:28.744 } 00:14:28.744 ] 00:14:28.744 }' 00:14:28.744 07:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.744 07:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.002 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:29.002 07:11:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:29.261 [2024-11-20 07:11:26.385396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.194 07:11:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.194 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.194 07:11:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.194 "name": "raid_bdev1", 00:14:30.194 "uuid": "cc700633-25ca-492b-a640-bac85f80ecdc", 00:14:30.194 "strip_size_kb": 64, 00:14:30.194 "state": "online", 00:14:30.194 "raid_level": "raid0", 00:14:30.194 "superblock": true, 00:14:30.194 "num_base_bdevs": 4, 00:14:30.194 "num_base_bdevs_discovered": 4, 00:14:30.194 "num_base_bdevs_operational": 4, 00:14:30.194 "base_bdevs_list": [ 00:14:30.194 { 00:14:30.194 "name": "BaseBdev1", 00:14:30.194 "uuid": "fefe1dc2-a1e6-5f59-8c24-c5806e603443", 00:14:30.194 "is_configured": true, 00:14:30.195 "data_offset": 2048, 00:14:30.195 "data_size": 63488 00:14:30.195 }, 00:14:30.195 { 00:14:30.195 "name": "BaseBdev2", 00:14:30.195 "uuid": "021da2a3-8e84-5976-bae3-0fee5bbcc830", 00:14:30.195 "is_configured": true, 00:14:30.195 "data_offset": 2048, 00:14:30.195 "data_size": 63488 00:14:30.195 }, 00:14:30.195 { 00:14:30.195 "name": "BaseBdev3", 00:14:30.195 "uuid": "323d8b11-cc40-5f87-9afe-a20e5a9bd093", 00:14:30.195 "is_configured": true, 00:14:30.195 "data_offset": 2048, 00:14:30.195 "data_size": 63488 00:14:30.195 }, 00:14:30.195 { 00:14:30.195 "name": "BaseBdev4", 00:14:30.195 "uuid": "8d6bc15f-aac5-5f7e-acf7-a1b0635caba4", 00:14:30.195 "is_configured": true, 00:14:30.195 "data_offset": 2048, 00:14:30.195 "data_size": 63488 00:14:30.195 } 00:14:30.195 ] 00:14:30.195 }' 00:14:30.195 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.195 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.452 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.452 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.452 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.711 [2024-11-20 07:11:27.772362] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.711 [2024-11-20 07:11:27.772404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.711 [2024-11-20 07:11:27.775753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.711 [2024-11-20 07:11:27.775834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.711 [2024-11-20 07:11:27.775929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.711 [2024-11-20 07:11:27.775953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:30.711 { 00:14:30.711 "results": [ 00:14:30.711 { 00:14:30.711 "job": "raid_bdev1", 00:14:30.711 "core_mask": "0x1", 00:14:30.711 "workload": "randrw", 00:14:30.711 "percentage": 50, 00:14:30.711 "status": "finished", 00:14:30.711 "queue_depth": 1, 00:14:30.711 "io_size": 131072, 00:14:30.711 "runtime": 1.384529, 00:14:30.711 "iops": 10454.81893120332, 00:14:30.711 "mibps": 1306.852366400415, 00:14:30.711 "io_failed": 1, 00:14:30.711 "io_timeout": 0, 00:14:30.711 "avg_latency_us": 132.62212665477657, 00:14:30.711 "min_latency_us": 42.123636363636365, 00:14:30.711 "max_latency_us": 1809.6872727272728 00:14:30.711 } 00:14:30.711 ], 00:14:30.711 "core_count": 1 00:14:30.711 } 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71019 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71019 ']' 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71019 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71019 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.711 killing process with pid 71019 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71019' 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71019 00:14:30.711 [2024-11-20 07:11:27.809691] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.711 07:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71019 00:14:30.969 [2024-11-20 07:11:28.093981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.k0QVisSKuO 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:14:31.903 00:14:31.903 real 0m4.824s 00:14:31.903 user 0m5.975s 00:14:31.903 sys 0m0.583s 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:14:31.903 07:11:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.903 ************************************ 00:14:31.903 END TEST raid_read_error_test 00:14:31.903 ************************************ 00:14:32.161 07:11:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:32.161 07:11:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:32.161 07:11:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:32.161 07:11:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.161 ************************************ 00:14:32.161 START TEST raid_write_error_test 00:14:32.161 ************************************ 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rSBnKnXjwD 00:14:32.161 07:11:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71170 00:14:32.161 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71170 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71170 ']' 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.162 07:11:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.162 [2024-11-20 07:11:29.331498] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:14:32.162 [2024-11-20 07:11:29.331653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71170 ] 00:14:32.420 [2024-11-20 07:11:29.507528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.420 [2024-11-20 07:11:29.636680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.678 [2024-11-20 07:11:29.838875] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.678 [2024-11-20 07:11:29.838921] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.245 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.245 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:33.245 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:33.245 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:33.245 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.245 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.245 BaseBdev1_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 true 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 [2024-11-20 07:11:30.362962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:33.246 [2024-11-20 07:11:30.363034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.246 [2024-11-20 07:11:30.363066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:33.246 [2024-11-20 07:11:30.363084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.246 [2024-11-20 07:11:30.365917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.246 [2024-11-20 07:11:30.365971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:33.246 BaseBdev1 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 BaseBdev2_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:33.246 07:11:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 true 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 [2024-11-20 07:11:30.418720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:33.246 [2024-11-20 07:11:30.418791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.246 [2024-11-20 07:11:30.418818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:33.246 [2024-11-20 07:11:30.418835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.246 [2024-11-20 07:11:30.421671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.246 [2024-11-20 07:11:30.421724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:33.246 BaseBdev2 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:33.246 BaseBdev3_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 true 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 [2024-11-20 07:11:30.484320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:33.246 [2024-11-20 07:11:30.484397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.246 [2024-11-20 07:11:30.484427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:33.246 [2024-11-20 07:11:30.484445] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.246 [2024-11-20 07:11:30.487265] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.246 [2024-11-20 07:11:30.487315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:33.246 BaseBdev3 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 BaseBdev4_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 true 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 [2024-11-20 07:11:30.540282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:33.246 [2024-11-20 07:11:30.540352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.246 [2024-11-20 07:11:30.540382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:33.246 [2024-11-20 07:11:30.540400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.246 [2024-11-20 07:11:30.543159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.246 [2024-11-20 07:11:30.543211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:33.246 BaseBdev4 
00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 [2024-11-20 07:11:30.548342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.246 [2024-11-20 07:11:30.550755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:33.246 [2024-11-20 07:11:30.550899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.246 [2024-11-20 07:11:30.551013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:33.246 [2024-11-20 07:11:30.551296] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:33.246 [2024-11-20 07:11:30.551334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:33.246 [2024-11-20 07:11:30.551642] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:33.246 [2024-11-20 07:11:30.551914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:33.246 [2024-11-20 07:11:30.551942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:33.246 [2024-11-20 07:11:30.552141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.246 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.506 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.506 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.506 "name": "raid_bdev1", 00:14:33.506 "uuid": "ffa1a4a4-3e6a-4399-ac6f-e5cacee541f8", 00:14:33.506 "strip_size_kb": 64, 00:14:33.506 "state": "online", 00:14:33.506 "raid_level": "raid0", 00:14:33.506 "superblock": true, 00:14:33.506 "num_base_bdevs": 4, 00:14:33.506 "num_base_bdevs_discovered": 4, 00:14:33.506 
"num_base_bdevs_operational": 4, 00:14:33.506 "base_bdevs_list": [ 00:14:33.506 { 00:14:33.506 "name": "BaseBdev1", 00:14:33.506 "uuid": "287155b8-ce9c-554b-ab8c-df74c50dfa50", 00:14:33.506 "is_configured": true, 00:14:33.506 "data_offset": 2048, 00:14:33.506 "data_size": 63488 00:14:33.506 }, 00:14:33.506 { 00:14:33.506 "name": "BaseBdev2", 00:14:33.506 "uuid": "f9cb8b70-0a16-5d14-9615-d35bfe47077d", 00:14:33.506 "is_configured": true, 00:14:33.506 "data_offset": 2048, 00:14:33.506 "data_size": 63488 00:14:33.506 }, 00:14:33.506 { 00:14:33.506 "name": "BaseBdev3", 00:14:33.506 "uuid": "f17bf8fe-c553-5dd3-80aa-6e7a901e379a", 00:14:33.506 "is_configured": true, 00:14:33.506 "data_offset": 2048, 00:14:33.506 "data_size": 63488 00:14:33.506 }, 00:14:33.506 { 00:14:33.506 "name": "BaseBdev4", 00:14:33.506 "uuid": "053562e4-c868-54a0-a92a-244b2d24eec2", 00:14:33.506 "is_configured": true, 00:14:33.506 "data_offset": 2048, 00:14:33.506 "data_size": 63488 00:14:33.506 } 00:14:33.506 ] 00:14:33.506 }' 00:14:33.506 07:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.506 07:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.764 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:33.764 07:11:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:34.022 [2024-11-20 07:11:31.197919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.957 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.957 "name": "raid_bdev1", 00:14:34.957 "uuid": "ffa1a4a4-3e6a-4399-ac6f-e5cacee541f8", 00:14:34.957 "strip_size_kb": 64, 00:14:34.957 "state": "online", 00:14:34.957 "raid_level": "raid0", 00:14:34.957 "superblock": true, 00:14:34.957 "num_base_bdevs": 4, 00:14:34.957 "num_base_bdevs_discovered": 4, 00:14:34.957 "num_base_bdevs_operational": 4, 00:14:34.957 "base_bdevs_list": [ 00:14:34.957 { 00:14:34.957 "name": "BaseBdev1", 00:14:34.957 "uuid": "287155b8-ce9c-554b-ab8c-df74c50dfa50", 00:14:34.957 "is_configured": true, 00:14:34.957 "data_offset": 2048, 00:14:34.957 "data_size": 63488 00:14:34.957 }, 00:14:34.957 { 00:14:34.957 "name": "BaseBdev2", 00:14:34.957 "uuid": "f9cb8b70-0a16-5d14-9615-d35bfe47077d", 00:14:34.957 "is_configured": true, 00:14:34.957 "data_offset": 2048, 00:14:34.957 "data_size": 63488 00:14:34.957 }, 00:14:34.957 { 00:14:34.957 "name": "BaseBdev3", 00:14:34.957 "uuid": "f17bf8fe-c553-5dd3-80aa-6e7a901e379a", 00:14:34.957 "is_configured": true, 00:14:34.957 "data_offset": 2048, 00:14:34.957 "data_size": 63488 00:14:34.957 }, 00:14:34.957 { 00:14:34.957 "name": "BaseBdev4", 00:14:34.957 "uuid": "053562e4-c868-54a0-a92a-244b2d24eec2", 00:14:34.957 "is_configured": true, 00:14:34.958 "data_offset": 2048, 00:14:34.958 "data_size": 63488 00:14:34.958 } 00:14:34.958 ] 00:14:34.958 }' 00:14:34.958 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.958 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:35.525 [2024-11-20 07:11:32.609333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.525 [2024-11-20 07:11:32.609375] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.525 [2024-11-20 07:11:32.612774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.525 [2024-11-20 07:11:32.612860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.525 [2024-11-20 07:11:32.612950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.525 [2024-11-20 07:11:32.612972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:14:35.525 { 00:14:35.525 "results": [ 00:14:35.525 { 00:14:35.525 "job": "raid_bdev1", 00:14:35.525 "core_mask": "0x1", 00:14:35.525 "workload": "randrw", 00:14:35.525 "percentage": 50, 00:14:35.525 "status": "finished", 00:14:35.525 "queue_depth": 1, 00:14:35.525 "io_size": 131072, 00:14:35.525 "runtime": 1.409006, 00:14:35.525 "iops": 10784.19822200899, 00:14:35.525 "mibps": 1348.0247777511238, 00:14:35.525 "io_failed": 1, 00:14:35.525 "io_timeout": 0, 00:14:35.525 "avg_latency_us": 129.72876881475986, 00:14:35.525 "min_latency_us": 42.35636363636364, 00:14:35.525 "max_latency_us": 1861.8181818181818 00:14:35.525 } 00:14:35.525 ], 00:14:35.525 "core_count": 1 00:14:35.525 } 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71170 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71170 ']' 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71170 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71170 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.525 killing process with pid 71170 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71170' 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71170 00:14:35.525 07:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71170 00:14:35.525 [2024-11-20 07:11:32.645376] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:35.785 [2024-11-20 07:11:32.935413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.733 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rSBnKnXjwD 00:14:36.733 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:36.733 07:11:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:36.733 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:14:36.733 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:14:36.733 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:36.733 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:36.733 07:11:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:14:36.733 00:14:36.733 real 0m4.774s 00:14:36.733 user 0m5.917s 00:14:36.733 sys 0m0.552s 00:14:36.733 07:11:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.733 ************************************ 00:14:36.733 END TEST raid_write_error_test 00:14:36.733 07:11:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.733 ************************************ 00:14:36.991 07:11:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:14:36.991 07:11:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:36.991 07:11:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:36.991 07:11:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:36.991 07:11:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.991 ************************************ 00:14:36.991 START TEST raid_state_function_test 00:14:36.991 ************************************ 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:36.991 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71314 00:14:36.992 Process raid pid: 71314 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71314' 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71314 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71314 ']' 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.992 07:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.992 [2024-11-20 07:11:34.178936] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:14:36.992 [2024-11-20 07:11:34.179680] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.250 [2024-11-20 07:11:34.367071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.250 [2024-11-20 07:11:34.497898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.508 [2024-11-20 07:11:34.704807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.508 [2024-11-20 07:11:34.704861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.074 [2024-11-20 07:11:35.176004] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.074 [2024-11-20 07:11:35.176069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.074 [2024-11-20 07:11:35.176087] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.074 [2024-11-20 07:11:35.176103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.074 [2024-11-20 07:11:35.176113] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:38.074 [2024-11-20 07:11:35.176126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.074 [2024-11-20 07:11:35.176136] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:38.074 [2024-11-20 07:11:35.176159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.074 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.075 "name": "Existed_Raid", 00:14:38.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.075 "strip_size_kb": 64, 00:14:38.075 "state": "configuring", 00:14:38.075 "raid_level": "concat", 00:14:38.075 "superblock": false, 00:14:38.075 "num_base_bdevs": 4, 00:14:38.075 "num_base_bdevs_discovered": 0, 00:14:38.075 "num_base_bdevs_operational": 4, 00:14:38.075 "base_bdevs_list": [ 00:14:38.075 { 00:14:38.075 "name": "BaseBdev1", 00:14:38.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.075 "is_configured": false, 00:14:38.075 "data_offset": 0, 00:14:38.075 "data_size": 0 00:14:38.075 }, 00:14:38.075 { 00:14:38.075 "name": "BaseBdev2", 00:14:38.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.075 "is_configured": false, 00:14:38.075 "data_offset": 0, 00:14:38.075 "data_size": 0 00:14:38.075 }, 00:14:38.075 { 00:14:38.075 "name": "BaseBdev3", 00:14:38.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.075 "is_configured": false, 00:14:38.075 "data_offset": 0, 00:14:38.075 "data_size": 0 00:14:38.075 }, 00:14:38.075 { 00:14:38.075 "name": "BaseBdev4", 00:14:38.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.075 "is_configured": false, 00:14:38.075 "data_offset": 0, 00:14:38.075 "data_size": 0 00:14:38.075 } 00:14:38.075 ] 00:14:38.075 }' 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.075 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 [2024-11-20 07:11:35.684082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.642 [2024-11-20 07:11:35.684143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 [2024-11-20 07:11:35.692073] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:38.642 [2024-11-20 07:11:35.692131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:38.642 [2024-11-20 07:11:35.692148] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.642 [2024-11-20 07:11:35.692164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.642 [2024-11-20 07:11:35.692174] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:38.642 [2024-11-20 07:11:35.692188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.642 [2024-11-20 07:11:35.692198] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:38.642 [2024-11-20 07:11:35.692212] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 [2024-11-20 07:11:35.736636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.642 BaseBdev1 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 [ 00:14:38.642 { 00:14:38.642 "name": "BaseBdev1", 00:14:38.642 "aliases": [ 00:14:38.642 "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f" 00:14:38.642 ], 00:14:38.642 "product_name": "Malloc disk", 00:14:38.642 "block_size": 512, 00:14:38.642 "num_blocks": 65536, 00:14:38.642 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:38.642 "assigned_rate_limits": { 00:14:38.642 "rw_ios_per_sec": 0, 00:14:38.642 "rw_mbytes_per_sec": 0, 00:14:38.642 "r_mbytes_per_sec": 0, 00:14:38.642 "w_mbytes_per_sec": 0 00:14:38.642 }, 00:14:38.642 "claimed": true, 00:14:38.642 "claim_type": "exclusive_write", 00:14:38.642 "zoned": false, 00:14:38.642 "supported_io_types": { 00:14:38.642 "read": true, 00:14:38.642 "write": true, 00:14:38.642 "unmap": true, 00:14:38.642 "flush": true, 00:14:38.642 "reset": true, 00:14:38.642 "nvme_admin": false, 00:14:38.642 "nvme_io": false, 00:14:38.642 "nvme_io_md": false, 00:14:38.642 "write_zeroes": true, 00:14:38.642 "zcopy": true, 00:14:38.642 "get_zone_info": false, 00:14:38.642 "zone_management": false, 00:14:38.642 "zone_append": false, 00:14:38.642 "compare": false, 00:14:38.642 "compare_and_write": false, 00:14:38.642 "abort": true, 00:14:38.642 "seek_hole": false, 00:14:38.642 "seek_data": false, 00:14:38.642 "copy": true, 00:14:38.642 "nvme_iov_md": false 00:14:38.642 }, 00:14:38.642 "memory_domains": [ 00:14:38.642 { 00:14:38.642 "dma_device_id": "system", 00:14:38.642 "dma_device_type": 1 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.642 "dma_device_type": 2 00:14:38.642 } 00:14:38.642 ], 00:14:38.642 "driver_specific": {} 00:14:38.642 } 00:14:38.642 ] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.642 "name": "Existed_Raid", 
00:14:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.642 "strip_size_kb": 64, 00:14:38.642 "state": "configuring", 00:14:38.642 "raid_level": "concat", 00:14:38.642 "superblock": false, 00:14:38.642 "num_base_bdevs": 4, 00:14:38.642 "num_base_bdevs_discovered": 1, 00:14:38.642 "num_base_bdevs_operational": 4, 00:14:38.642 "base_bdevs_list": [ 00:14:38.642 { 00:14:38.642 "name": "BaseBdev1", 00:14:38.642 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:38.642 "is_configured": true, 00:14:38.642 "data_offset": 0, 00:14:38.642 "data_size": 65536 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "name": "BaseBdev2", 00:14:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.642 "is_configured": false, 00:14:38.642 "data_offset": 0, 00:14:38.642 "data_size": 0 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "name": "BaseBdev3", 00:14:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.642 "is_configured": false, 00:14:38.642 "data_offset": 0, 00:14:38.642 "data_size": 0 00:14:38.642 }, 00:14:38.642 { 00:14:38.642 "name": "BaseBdev4", 00:14:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.642 "is_configured": false, 00:14:38.642 "data_offset": 0, 00:14:38.642 "data_size": 0 00:14:38.642 } 00:14:38.642 ] 00:14:38.642 }' 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.642 07:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.211 [2024-11-20 07:11:36.292853] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.211 [2024-11-20 07:11:36.292946] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.211 [2024-11-20 07:11:36.300931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.211 [2024-11-20 07:11:36.303366] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.211 [2024-11-20 07:11:36.303422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.211 [2024-11-20 07:11:36.303438] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:39.211 [2024-11-20 07:11:36.303455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:39.211 [2024-11-20 07:11:36.303466] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:39.211 [2024-11-20 07:11:36.303479] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.211 "name": "Existed_Raid", 00:14:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.211 "strip_size_kb": 64, 00:14:39.211 "state": "configuring", 00:14:39.211 "raid_level": "concat", 00:14:39.211 "superblock": false, 00:14:39.211 "num_base_bdevs": 4, 00:14:39.211 
"num_base_bdevs_discovered": 1, 00:14:39.211 "num_base_bdevs_operational": 4, 00:14:39.211 "base_bdevs_list": [ 00:14:39.211 { 00:14:39.211 "name": "BaseBdev1", 00:14:39.211 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:39.211 "is_configured": true, 00:14:39.211 "data_offset": 0, 00:14:39.211 "data_size": 65536 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "name": "BaseBdev2", 00:14:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.211 "is_configured": false, 00:14:39.211 "data_offset": 0, 00:14:39.211 "data_size": 0 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "name": "BaseBdev3", 00:14:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.211 "is_configured": false, 00:14:39.211 "data_offset": 0, 00:14:39.211 "data_size": 0 00:14:39.211 }, 00:14:39.211 { 00:14:39.211 "name": "BaseBdev4", 00:14:39.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.211 "is_configured": false, 00:14:39.211 "data_offset": 0, 00:14:39.211 "data_size": 0 00:14:39.211 } 00:14:39.211 ] 00:14:39.211 }' 00:14:39.211 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.212 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.779 [2024-11-20 07:11:36.887784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.779 BaseBdev2 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:39.779 07:11:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.779 [ 00:14:39.779 { 00:14:39.779 "name": "BaseBdev2", 00:14:39.779 "aliases": [ 00:14:39.779 "51d1b109-aee3-4096-8e77-90a36bb4fd5c" 00:14:39.779 ], 00:14:39.779 "product_name": "Malloc disk", 00:14:39.779 "block_size": 512, 00:14:39.779 "num_blocks": 65536, 00:14:39.779 "uuid": "51d1b109-aee3-4096-8e77-90a36bb4fd5c", 00:14:39.779 "assigned_rate_limits": { 00:14:39.779 "rw_ios_per_sec": 0, 00:14:39.779 "rw_mbytes_per_sec": 0, 00:14:39.779 "r_mbytes_per_sec": 0, 00:14:39.779 "w_mbytes_per_sec": 0 00:14:39.779 }, 00:14:39.779 "claimed": true, 00:14:39.779 "claim_type": "exclusive_write", 00:14:39.779 "zoned": false, 00:14:39.779 "supported_io_types": { 
00:14:39.779 "read": true, 00:14:39.779 "write": true, 00:14:39.779 "unmap": true, 00:14:39.779 "flush": true, 00:14:39.779 "reset": true, 00:14:39.779 "nvme_admin": false, 00:14:39.779 "nvme_io": false, 00:14:39.779 "nvme_io_md": false, 00:14:39.779 "write_zeroes": true, 00:14:39.779 "zcopy": true, 00:14:39.779 "get_zone_info": false, 00:14:39.779 "zone_management": false, 00:14:39.779 "zone_append": false, 00:14:39.779 "compare": false, 00:14:39.779 "compare_and_write": false, 00:14:39.779 "abort": true, 00:14:39.779 "seek_hole": false, 00:14:39.779 "seek_data": false, 00:14:39.779 "copy": true, 00:14:39.779 "nvme_iov_md": false 00:14:39.779 }, 00:14:39.779 "memory_domains": [ 00:14:39.779 { 00:14:39.779 "dma_device_id": "system", 00:14:39.779 "dma_device_type": 1 00:14:39.779 }, 00:14:39.779 { 00:14:39.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.779 "dma_device_type": 2 00:14:39.779 } 00:14:39.779 ], 00:14:39.779 "driver_specific": {} 00:14:39.779 } 00:14:39.779 ] 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.779 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.779 "name": "Existed_Raid", 00:14:39.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.779 "strip_size_kb": 64, 00:14:39.779 "state": "configuring", 00:14:39.779 "raid_level": "concat", 00:14:39.779 "superblock": false, 00:14:39.779 "num_base_bdevs": 4, 00:14:39.779 "num_base_bdevs_discovered": 2, 00:14:39.779 "num_base_bdevs_operational": 4, 00:14:39.779 "base_bdevs_list": [ 00:14:39.779 { 00:14:39.779 "name": "BaseBdev1", 00:14:39.779 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:39.779 "is_configured": true, 00:14:39.779 "data_offset": 0, 00:14:39.779 "data_size": 65536 00:14:39.779 }, 00:14:39.779 { 00:14:39.779 "name": "BaseBdev2", 00:14:39.779 "uuid": "51d1b109-aee3-4096-8e77-90a36bb4fd5c", 00:14:39.779 
"is_configured": true, 00:14:39.779 "data_offset": 0, 00:14:39.779 "data_size": 65536 00:14:39.779 }, 00:14:39.779 { 00:14:39.779 "name": "BaseBdev3", 00:14:39.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.779 "is_configured": false, 00:14:39.779 "data_offset": 0, 00:14:39.780 "data_size": 0 00:14:39.780 }, 00:14:39.780 { 00:14:39.780 "name": "BaseBdev4", 00:14:39.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.780 "is_configured": false, 00:14:39.780 "data_offset": 0, 00:14:39.780 "data_size": 0 00:14:39.780 } 00:14:39.780 ] 00:14:39.780 }' 00:14:39.780 07:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.780 07:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.348 [2024-11-20 07:11:37.507648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.348 BaseBdev3 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.348 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.348 [ 00:14:40.348 { 00:14:40.348 "name": "BaseBdev3", 00:14:40.349 "aliases": [ 00:14:40.349 "7fa5017f-c280-4d02-a335-be6c5178973d" 00:14:40.349 ], 00:14:40.349 "product_name": "Malloc disk", 00:14:40.349 "block_size": 512, 00:14:40.349 "num_blocks": 65536, 00:14:40.349 "uuid": "7fa5017f-c280-4d02-a335-be6c5178973d", 00:14:40.349 "assigned_rate_limits": { 00:14:40.349 "rw_ios_per_sec": 0, 00:14:40.349 "rw_mbytes_per_sec": 0, 00:14:40.349 "r_mbytes_per_sec": 0, 00:14:40.349 "w_mbytes_per_sec": 0 00:14:40.349 }, 00:14:40.349 "claimed": true, 00:14:40.349 "claim_type": "exclusive_write", 00:14:40.349 "zoned": false, 00:14:40.349 "supported_io_types": { 00:14:40.349 "read": true, 00:14:40.349 "write": true, 00:14:40.349 "unmap": true, 00:14:40.349 "flush": true, 00:14:40.349 "reset": true, 00:14:40.349 "nvme_admin": false, 00:14:40.349 "nvme_io": false, 00:14:40.349 "nvme_io_md": false, 00:14:40.349 "write_zeroes": true, 00:14:40.349 "zcopy": true, 00:14:40.349 "get_zone_info": false, 00:14:40.349 "zone_management": false, 00:14:40.349 "zone_append": false, 00:14:40.349 "compare": false, 00:14:40.349 "compare_and_write": false, 
00:14:40.349 "abort": true, 00:14:40.349 "seek_hole": false, 00:14:40.349 "seek_data": false, 00:14:40.349 "copy": true, 00:14:40.349 "nvme_iov_md": false 00:14:40.349 }, 00:14:40.349 "memory_domains": [ 00:14:40.349 { 00:14:40.349 "dma_device_id": "system", 00:14:40.349 "dma_device_type": 1 00:14:40.349 }, 00:14:40.349 { 00:14:40.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.349 "dma_device_type": 2 00:14:40.349 } 00:14:40.349 ], 00:14:40.349 "driver_specific": {} 00:14:40.349 } 00:14:40.349 ] 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.349 "name": "Existed_Raid", 00:14:40.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.349 "strip_size_kb": 64, 00:14:40.349 "state": "configuring", 00:14:40.349 "raid_level": "concat", 00:14:40.349 "superblock": false, 00:14:40.349 "num_base_bdevs": 4, 00:14:40.349 "num_base_bdevs_discovered": 3, 00:14:40.349 "num_base_bdevs_operational": 4, 00:14:40.349 "base_bdevs_list": [ 00:14:40.349 { 00:14:40.349 "name": "BaseBdev1", 00:14:40.349 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:40.349 "is_configured": true, 00:14:40.349 "data_offset": 0, 00:14:40.349 "data_size": 65536 00:14:40.349 }, 00:14:40.349 { 00:14:40.349 "name": "BaseBdev2", 00:14:40.349 "uuid": "51d1b109-aee3-4096-8e77-90a36bb4fd5c", 00:14:40.349 "is_configured": true, 00:14:40.349 "data_offset": 0, 00:14:40.349 "data_size": 65536 00:14:40.349 }, 00:14:40.349 { 00:14:40.349 "name": "BaseBdev3", 00:14:40.349 "uuid": "7fa5017f-c280-4d02-a335-be6c5178973d", 00:14:40.349 "is_configured": true, 00:14:40.349 "data_offset": 0, 00:14:40.349 "data_size": 65536 00:14:40.349 }, 00:14:40.349 { 00:14:40.349 "name": "BaseBdev4", 00:14:40.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.349 "is_configured": false, 
00:14:40.349 "data_offset": 0, 00:14:40.349 "data_size": 0 00:14:40.349 } 00:14:40.349 ] 00:14:40.349 }' 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.349 07:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.917 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.917 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.917 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.917 [2024-11-20 07:11:38.082042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.917 [2024-11-20 07:11:38.082109] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:40.917 [2024-11-20 07:11:38.082123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:40.917 [2024-11-20 07:11:38.082494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:40.918 [2024-11-20 07:11:38.082727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:40.918 [2024-11-20 07:11:38.082761] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:40.918 [2024-11-20 07:11:38.083088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.918 BaseBdev4 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 [ 00:14:40.918 { 00:14:40.918 "name": "BaseBdev4", 00:14:40.918 "aliases": [ 00:14:40.918 "c8c8a8e4-dda3-4920-a888-824b8690abd2" 00:14:40.918 ], 00:14:40.918 "product_name": "Malloc disk", 00:14:40.918 "block_size": 512, 00:14:40.918 "num_blocks": 65536, 00:14:40.918 "uuid": "c8c8a8e4-dda3-4920-a888-824b8690abd2", 00:14:40.918 "assigned_rate_limits": { 00:14:40.918 "rw_ios_per_sec": 0, 00:14:40.918 "rw_mbytes_per_sec": 0, 00:14:40.918 "r_mbytes_per_sec": 0, 00:14:40.918 "w_mbytes_per_sec": 0 00:14:40.918 }, 00:14:40.918 "claimed": true, 00:14:40.918 "claim_type": "exclusive_write", 00:14:40.918 "zoned": false, 00:14:40.918 "supported_io_types": { 00:14:40.918 "read": true, 00:14:40.918 "write": true, 00:14:40.918 "unmap": true, 00:14:40.918 "flush": true, 00:14:40.918 "reset": true, 00:14:40.918 
"nvme_admin": false, 00:14:40.918 "nvme_io": false, 00:14:40.918 "nvme_io_md": false, 00:14:40.918 "write_zeroes": true, 00:14:40.918 "zcopy": true, 00:14:40.918 "get_zone_info": false, 00:14:40.918 "zone_management": false, 00:14:40.918 "zone_append": false, 00:14:40.918 "compare": false, 00:14:40.918 "compare_and_write": false, 00:14:40.918 "abort": true, 00:14:40.918 "seek_hole": false, 00:14:40.918 "seek_data": false, 00:14:40.918 "copy": true, 00:14:40.918 "nvme_iov_md": false 00:14:40.918 }, 00:14:40.918 "memory_domains": [ 00:14:40.918 { 00:14:40.918 "dma_device_id": "system", 00:14:40.918 "dma_device_type": 1 00:14:40.918 }, 00:14:40.918 { 00:14:40.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.918 "dma_device_type": 2 00:14:40.918 } 00:14:40.918 ], 00:14:40.918 "driver_specific": {} 00:14:40.918 } 00:14:40.918 ] 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:40.918 
07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.918 "name": "Existed_Raid", 00:14:40.918 "uuid": "a7e0a50e-d836-4ac6-bee2-79bcad437f0d", 00:14:40.918 "strip_size_kb": 64, 00:14:40.918 "state": "online", 00:14:40.918 "raid_level": "concat", 00:14:40.918 "superblock": false, 00:14:40.918 "num_base_bdevs": 4, 00:14:40.918 "num_base_bdevs_discovered": 4, 00:14:40.918 "num_base_bdevs_operational": 4, 00:14:40.918 "base_bdevs_list": [ 00:14:40.918 { 00:14:40.918 "name": "BaseBdev1", 00:14:40.918 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:40.918 "is_configured": true, 00:14:40.918 "data_offset": 0, 00:14:40.918 "data_size": 65536 00:14:40.918 }, 00:14:40.918 { 00:14:40.918 "name": "BaseBdev2", 00:14:40.918 "uuid": "51d1b109-aee3-4096-8e77-90a36bb4fd5c", 00:14:40.918 "is_configured": true, 00:14:40.918 "data_offset": 0, 00:14:40.918 "data_size": 65536 00:14:40.918 }, 00:14:40.918 { 00:14:40.918 "name": "BaseBdev3", 
00:14:40.918 "uuid": "7fa5017f-c280-4d02-a335-be6c5178973d", 00:14:40.918 "is_configured": true, 00:14:40.918 "data_offset": 0, 00:14:40.918 "data_size": 65536 00:14:40.918 }, 00:14:40.918 { 00:14:40.918 "name": "BaseBdev4", 00:14:40.918 "uuid": "c8c8a8e4-dda3-4920-a888-824b8690abd2", 00:14:40.918 "is_configured": true, 00:14:40.918 "data_offset": 0, 00:14:40.918 "data_size": 65536 00:14:40.918 } 00:14:40.918 ] 00:14:40.918 }' 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.918 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.485 [2024-11-20 07:11:38.622705] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.485 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.485 
07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.485 "name": "Existed_Raid", 00:14:41.485 "aliases": [ 00:14:41.485 "a7e0a50e-d836-4ac6-bee2-79bcad437f0d" 00:14:41.485 ], 00:14:41.485 "product_name": "Raid Volume", 00:14:41.485 "block_size": 512, 00:14:41.485 "num_blocks": 262144, 00:14:41.485 "uuid": "a7e0a50e-d836-4ac6-bee2-79bcad437f0d", 00:14:41.485 "assigned_rate_limits": { 00:14:41.485 "rw_ios_per_sec": 0, 00:14:41.485 "rw_mbytes_per_sec": 0, 00:14:41.485 "r_mbytes_per_sec": 0, 00:14:41.485 "w_mbytes_per_sec": 0 00:14:41.485 }, 00:14:41.485 "claimed": false, 00:14:41.485 "zoned": false, 00:14:41.485 "supported_io_types": { 00:14:41.485 "read": true, 00:14:41.485 "write": true, 00:14:41.485 "unmap": true, 00:14:41.485 "flush": true, 00:14:41.485 "reset": true, 00:14:41.485 "nvme_admin": false, 00:14:41.485 "nvme_io": false, 00:14:41.485 "nvme_io_md": false, 00:14:41.486 "write_zeroes": true, 00:14:41.486 "zcopy": false, 00:14:41.486 "get_zone_info": false, 00:14:41.486 "zone_management": false, 00:14:41.486 "zone_append": false, 00:14:41.486 "compare": false, 00:14:41.486 "compare_and_write": false, 00:14:41.486 "abort": false, 00:14:41.486 "seek_hole": false, 00:14:41.486 "seek_data": false, 00:14:41.486 "copy": false, 00:14:41.486 "nvme_iov_md": false 00:14:41.486 }, 00:14:41.486 "memory_domains": [ 00:14:41.486 { 00:14:41.486 "dma_device_id": "system", 00:14:41.486 "dma_device_type": 1 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.486 "dma_device_type": 2 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": "system", 00:14:41.486 "dma_device_type": 1 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.486 "dma_device_type": 2 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": "system", 00:14:41.486 "dma_device_type": 1 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:41.486 "dma_device_type": 2 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": "system", 00:14:41.486 "dma_device_type": 1 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.486 "dma_device_type": 2 00:14:41.486 } 00:14:41.486 ], 00:14:41.486 "driver_specific": { 00:14:41.486 "raid": { 00:14:41.486 "uuid": "a7e0a50e-d836-4ac6-bee2-79bcad437f0d", 00:14:41.486 "strip_size_kb": 64, 00:14:41.486 "state": "online", 00:14:41.486 "raid_level": "concat", 00:14:41.486 "superblock": false, 00:14:41.486 "num_base_bdevs": 4, 00:14:41.486 "num_base_bdevs_discovered": 4, 00:14:41.486 "num_base_bdevs_operational": 4, 00:14:41.486 "base_bdevs_list": [ 00:14:41.486 { 00:14:41.486 "name": "BaseBdev1", 00:14:41.486 "uuid": "67c3a0d0-fe90-49b4-bd21-7ab16740cb4f", 00:14:41.486 "is_configured": true, 00:14:41.486 "data_offset": 0, 00:14:41.486 "data_size": 65536 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "name": "BaseBdev2", 00:14:41.486 "uuid": "51d1b109-aee3-4096-8e77-90a36bb4fd5c", 00:14:41.486 "is_configured": true, 00:14:41.486 "data_offset": 0, 00:14:41.486 "data_size": 65536 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "name": "BaseBdev3", 00:14:41.486 "uuid": "7fa5017f-c280-4d02-a335-be6c5178973d", 00:14:41.486 "is_configured": true, 00:14:41.486 "data_offset": 0, 00:14:41.486 "data_size": 65536 00:14:41.486 }, 00:14:41.486 { 00:14:41.486 "name": "BaseBdev4", 00:14:41.486 "uuid": "c8c8a8e4-dda3-4920-a888-824b8690abd2", 00:14:41.486 "is_configured": true, 00:14:41.486 "data_offset": 0, 00:14:41.486 "data_size": 65536 00:14:41.486 } 00:14:41.486 ] 00:14:41.486 } 00:14:41.486 } 00:14:41.486 }' 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:41.486 BaseBdev2 
00:14:41.486 BaseBdev3 00:14:41.486 BaseBdev4' 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.486 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.745 07:11:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.745 07:11:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.745 07:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.745 [2024-11-20 07:11:39.002425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.745 [2024-11-20 07:11:39.002470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.745 [2024-11-20 07:11:39.002538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.004 "name": "Existed_Raid", 00:14:42.004 "uuid": "a7e0a50e-d836-4ac6-bee2-79bcad437f0d", 00:14:42.004 "strip_size_kb": 64, 00:14:42.004 "state": "offline", 00:14:42.004 "raid_level": "concat", 00:14:42.004 "superblock": false, 00:14:42.004 "num_base_bdevs": 4, 00:14:42.004 "num_base_bdevs_discovered": 3, 00:14:42.004 "num_base_bdevs_operational": 3, 00:14:42.004 "base_bdevs_list": [ 00:14:42.004 { 00:14:42.004 "name": null, 00:14:42.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.004 "is_configured": false, 00:14:42.004 "data_offset": 0, 00:14:42.004 "data_size": 65536 00:14:42.004 }, 00:14:42.004 { 00:14:42.004 "name": "BaseBdev2", 00:14:42.004 "uuid": "51d1b109-aee3-4096-8e77-90a36bb4fd5c", 00:14:42.004 "is_configured": 
true, 00:14:42.004 "data_offset": 0, 00:14:42.004 "data_size": 65536 00:14:42.004 }, 00:14:42.004 { 00:14:42.004 "name": "BaseBdev3", 00:14:42.004 "uuid": "7fa5017f-c280-4d02-a335-be6c5178973d", 00:14:42.004 "is_configured": true, 00:14:42.004 "data_offset": 0, 00:14:42.004 "data_size": 65536 00:14:42.004 }, 00:14:42.004 { 00:14:42.004 "name": "BaseBdev4", 00:14:42.004 "uuid": "c8c8a8e4-dda3-4920-a888-824b8690abd2", 00:14:42.004 "is_configured": true, 00:14:42.004 "data_offset": 0, 00:14:42.004 "data_size": 65536 00:14:42.004 } 00:14:42.004 ] 00:14:42.004 }' 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.004 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.299 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:42.299 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.299 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.299 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.299 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.559 [2024-11-20 07:11:39.675382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.559 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.559 [2024-11-20 07:11:39.820415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:42.819 07:11:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.819 07:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.819 [2024-11-20 07:11:39.969114] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:42.819 [2024-11-20 07:11:39.969182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.819 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.079 BaseBdev2 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.079 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.079 [ 00:14:43.079 { 00:14:43.079 "name": "BaseBdev2", 00:14:43.079 "aliases": [ 00:14:43.079 "cc88372f-ae80-4de5-8507-eb4d2e33838b" 00:14:43.079 ], 00:14:43.079 "product_name": "Malloc disk", 00:14:43.079 "block_size": 512, 00:14:43.079 "num_blocks": 65536, 00:14:43.079 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:43.079 "assigned_rate_limits": { 00:14:43.079 "rw_ios_per_sec": 0, 00:14:43.079 "rw_mbytes_per_sec": 0, 00:14:43.079 "r_mbytes_per_sec": 0, 00:14:43.079 "w_mbytes_per_sec": 0 00:14:43.079 }, 00:14:43.079 "claimed": false, 00:14:43.079 "zoned": false, 00:14:43.079 "supported_io_types": { 00:14:43.079 "read": true, 00:14:43.079 "write": true, 00:14:43.079 "unmap": true, 00:14:43.079 "flush": true, 00:14:43.079 "reset": true, 00:14:43.079 "nvme_admin": false, 00:14:43.079 "nvme_io": false, 00:14:43.079 "nvme_io_md": false, 00:14:43.079 "write_zeroes": true, 00:14:43.080 "zcopy": true, 00:14:43.080 "get_zone_info": false, 00:14:43.080 "zone_management": false, 00:14:43.080 "zone_append": false, 00:14:43.080 "compare": false, 00:14:43.080 "compare_and_write": false, 00:14:43.080 "abort": true, 00:14:43.080 "seek_hole": false, 00:14:43.080 
"seek_data": false, 00:14:43.080 "copy": true, 00:14:43.080 "nvme_iov_md": false 00:14:43.080 }, 00:14:43.080 "memory_domains": [ 00:14:43.080 { 00:14:43.080 "dma_device_id": "system", 00:14:43.080 "dma_device_type": 1 00:14:43.080 }, 00:14:43.080 { 00:14:43.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.080 "dma_device_type": 2 00:14:43.080 } 00:14:43.080 ], 00:14:43.080 "driver_specific": {} 00:14:43.080 } 00:14:43.080 ] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.080 BaseBdev3 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.080 [ 00:14:43.080 { 00:14:43.080 "name": "BaseBdev3", 00:14:43.080 "aliases": [ 00:14:43.080 "83bcf1f6-5f2e-4461-9497-69c831db8de2" 00:14:43.080 ], 00:14:43.080 "product_name": "Malloc disk", 00:14:43.080 "block_size": 512, 00:14:43.080 "num_blocks": 65536, 00:14:43.080 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:43.080 "assigned_rate_limits": { 00:14:43.080 "rw_ios_per_sec": 0, 00:14:43.080 "rw_mbytes_per_sec": 0, 00:14:43.080 "r_mbytes_per_sec": 0, 00:14:43.080 "w_mbytes_per_sec": 0 00:14:43.080 }, 00:14:43.080 "claimed": false, 00:14:43.080 "zoned": false, 00:14:43.080 "supported_io_types": { 00:14:43.080 "read": true, 00:14:43.080 "write": true, 00:14:43.080 "unmap": true, 00:14:43.080 "flush": true, 00:14:43.080 "reset": true, 00:14:43.080 "nvme_admin": false, 00:14:43.080 "nvme_io": false, 00:14:43.080 "nvme_io_md": false, 00:14:43.080 "write_zeroes": true, 00:14:43.080 "zcopy": true, 00:14:43.080 "get_zone_info": false, 00:14:43.080 "zone_management": false, 00:14:43.080 "zone_append": false, 00:14:43.080 "compare": false, 00:14:43.080 "compare_and_write": false, 00:14:43.080 "abort": true, 00:14:43.080 "seek_hole": false, 00:14:43.080 "seek_data": false, 
00:14:43.080 "copy": true, 00:14:43.080 "nvme_iov_md": false 00:14:43.080 }, 00:14:43.080 "memory_domains": [ 00:14:43.080 { 00:14:43.080 "dma_device_id": "system", 00:14:43.080 "dma_device_type": 1 00:14:43.080 }, 00:14:43.080 { 00:14:43.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.080 "dma_device_type": 2 00:14:43.080 } 00:14:43.080 ], 00:14:43.080 "driver_specific": {} 00:14:43.080 } 00:14:43.080 ] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.080 BaseBdev4 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:43.080 
07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.080 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.080 [ 00:14:43.080 { 00:14:43.080 "name": "BaseBdev4", 00:14:43.080 "aliases": [ 00:14:43.080 "802c989a-2c7f-46ed-a300-0dda58c23b51" 00:14:43.080 ], 00:14:43.080 "product_name": "Malloc disk", 00:14:43.080 "block_size": 512, 00:14:43.080 "num_blocks": 65536, 00:14:43.080 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:43.080 "assigned_rate_limits": { 00:14:43.080 "rw_ios_per_sec": 0, 00:14:43.080 "rw_mbytes_per_sec": 0, 00:14:43.080 "r_mbytes_per_sec": 0, 00:14:43.080 "w_mbytes_per_sec": 0 00:14:43.080 }, 00:14:43.080 "claimed": false, 00:14:43.080 "zoned": false, 00:14:43.080 "supported_io_types": { 00:14:43.080 "read": true, 00:14:43.080 "write": true, 00:14:43.080 "unmap": true, 00:14:43.080 "flush": true, 00:14:43.080 "reset": true, 00:14:43.080 "nvme_admin": false, 00:14:43.080 "nvme_io": false, 00:14:43.080 "nvme_io_md": false, 00:14:43.080 "write_zeroes": true, 00:14:43.080 "zcopy": true, 00:14:43.080 "get_zone_info": false, 00:14:43.080 "zone_management": false, 00:14:43.080 "zone_append": false, 00:14:43.080 "compare": false, 00:14:43.081 "compare_and_write": false, 00:14:43.081 "abort": true, 00:14:43.081 "seek_hole": false, 00:14:43.081 "seek_data": false, 00:14:43.081 
"copy": true, 00:14:43.081 "nvme_iov_md": false 00:14:43.081 }, 00:14:43.081 "memory_domains": [ 00:14:43.081 { 00:14:43.081 "dma_device_id": "system", 00:14:43.081 "dma_device_type": 1 00:14:43.081 }, 00:14:43.081 { 00:14:43.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.081 "dma_device_type": 2 00:14:43.081 } 00:14:43.081 ], 00:14:43.081 "driver_specific": {} 00:14:43.081 } 00:14:43.081 ] 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.081 [2024-11-20 07:11:40.319178] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.081 [2024-11-20 07:11:40.319233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.081 [2024-11-20 07:11:40.319265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:43.081 [2024-11-20 07:11:40.321680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.081 [2024-11-20 07:11:40.321759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.081 07:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.081 "name": "Existed_Raid", 00:14:43.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.081 "strip_size_kb": 64, 00:14:43.081 "state": "configuring", 00:14:43.081 
"raid_level": "concat", 00:14:43.081 "superblock": false, 00:14:43.081 "num_base_bdevs": 4, 00:14:43.081 "num_base_bdevs_discovered": 3, 00:14:43.081 "num_base_bdevs_operational": 4, 00:14:43.081 "base_bdevs_list": [ 00:14:43.081 { 00:14:43.081 "name": "BaseBdev1", 00:14:43.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.081 "is_configured": false, 00:14:43.081 "data_offset": 0, 00:14:43.081 "data_size": 0 00:14:43.081 }, 00:14:43.081 { 00:14:43.081 "name": "BaseBdev2", 00:14:43.081 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:43.081 "is_configured": true, 00:14:43.081 "data_offset": 0, 00:14:43.081 "data_size": 65536 00:14:43.081 }, 00:14:43.081 { 00:14:43.081 "name": "BaseBdev3", 00:14:43.081 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:43.081 "is_configured": true, 00:14:43.081 "data_offset": 0, 00:14:43.081 "data_size": 65536 00:14:43.081 }, 00:14:43.081 { 00:14:43.081 "name": "BaseBdev4", 00:14:43.081 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:43.081 "is_configured": true, 00:14:43.081 "data_offset": 0, 00:14:43.081 "data_size": 65536 00:14:43.081 } 00:14:43.081 ] 00:14:43.081 }' 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.081 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.649 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:43.649 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.649 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.649 [2024-11-20 07:11:40.839336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.650 "name": "Existed_Raid", 00:14:43.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.650 "strip_size_kb": 64, 00:14:43.650 "state": "configuring", 00:14:43.650 "raid_level": "concat", 00:14:43.650 "superblock": false, 
00:14:43.650 "num_base_bdevs": 4, 00:14:43.650 "num_base_bdevs_discovered": 2, 00:14:43.650 "num_base_bdevs_operational": 4, 00:14:43.650 "base_bdevs_list": [ 00:14:43.650 { 00:14:43.650 "name": "BaseBdev1", 00:14:43.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.650 "is_configured": false, 00:14:43.650 "data_offset": 0, 00:14:43.650 "data_size": 0 00:14:43.650 }, 00:14:43.650 { 00:14:43.650 "name": null, 00:14:43.650 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:43.650 "is_configured": false, 00:14:43.650 "data_offset": 0, 00:14:43.650 "data_size": 65536 00:14:43.650 }, 00:14:43.650 { 00:14:43.650 "name": "BaseBdev3", 00:14:43.650 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:43.650 "is_configured": true, 00:14:43.650 "data_offset": 0, 00:14:43.650 "data_size": 65536 00:14:43.650 }, 00:14:43.650 { 00:14:43.650 "name": "BaseBdev4", 00:14:43.650 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:43.650 "is_configured": true, 00:14:43.650 "data_offset": 0, 00:14:43.650 "data_size": 65536 00:14:43.650 } 00:14:43.650 ] 00:14:43.650 }' 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.650 07:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:44.218 07:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.218 [2024-11-20 07:11:41.453371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.218 BaseBdev1 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.218 [ 00:14:44.218 { 00:14:44.218 "name": "BaseBdev1", 00:14:44.218 "aliases": [ 00:14:44.218 "d32eb77d-cd6e-49da-9b9b-504d2318c027" 00:14:44.218 ], 00:14:44.218 "product_name": "Malloc disk", 00:14:44.218 "block_size": 512, 00:14:44.218 "num_blocks": 65536, 00:14:44.218 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:44.218 "assigned_rate_limits": { 00:14:44.218 "rw_ios_per_sec": 0, 00:14:44.218 "rw_mbytes_per_sec": 0, 00:14:44.218 "r_mbytes_per_sec": 0, 00:14:44.218 "w_mbytes_per_sec": 0 00:14:44.218 }, 00:14:44.218 "claimed": true, 00:14:44.218 "claim_type": "exclusive_write", 00:14:44.218 "zoned": false, 00:14:44.218 "supported_io_types": { 00:14:44.218 "read": true, 00:14:44.218 "write": true, 00:14:44.218 "unmap": true, 00:14:44.218 "flush": true, 00:14:44.218 "reset": true, 00:14:44.218 "nvme_admin": false, 00:14:44.218 "nvme_io": false, 00:14:44.218 "nvme_io_md": false, 00:14:44.218 "write_zeroes": true, 00:14:44.218 "zcopy": true, 00:14:44.218 "get_zone_info": false, 00:14:44.218 "zone_management": false, 00:14:44.218 "zone_append": false, 00:14:44.218 "compare": false, 00:14:44.218 "compare_and_write": false, 00:14:44.218 "abort": true, 00:14:44.218 "seek_hole": false, 00:14:44.218 "seek_data": false, 00:14:44.218 "copy": true, 00:14:44.218 "nvme_iov_md": false 00:14:44.218 }, 00:14:44.218 "memory_domains": [ 00:14:44.218 { 00:14:44.218 "dma_device_id": "system", 00:14:44.218 "dma_device_type": 1 00:14:44.218 }, 00:14:44.218 { 00:14:44.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.218 "dma_device_type": 2 00:14:44.218 } 00:14:44.218 ], 00:14:44.218 "driver_specific": {} 00:14:44.218 } 00:14:44.218 ] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.218 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.218 "name": "Existed_Raid", 00:14:44.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.218 "strip_size_kb": 64, 00:14:44.218 "state": "configuring", 00:14:44.218 "raid_level": "concat", 00:14:44.218 "superblock": false, 
00:14:44.218 "num_base_bdevs": 4, 00:14:44.218 "num_base_bdevs_discovered": 3, 00:14:44.218 "num_base_bdevs_operational": 4, 00:14:44.218 "base_bdevs_list": [ 00:14:44.218 { 00:14:44.218 "name": "BaseBdev1", 00:14:44.219 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:44.219 "is_configured": true, 00:14:44.219 "data_offset": 0, 00:14:44.219 "data_size": 65536 00:14:44.219 }, 00:14:44.219 { 00:14:44.219 "name": null, 00:14:44.219 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:44.219 "is_configured": false, 00:14:44.219 "data_offset": 0, 00:14:44.219 "data_size": 65536 00:14:44.219 }, 00:14:44.219 { 00:14:44.219 "name": "BaseBdev3", 00:14:44.219 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:44.219 "is_configured": true, 00:14:44.219 "data_offset": 0, 00:14:44.219 "data_size": 65536 00:14:44.219 }, 00:14:44.219 { 00:14:44.219 "name": "BaseBdev4", 00:14:44.219 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:44.219 "is_configured": true, 00:14:44.219 "data_offset": 0, 00:14:44.219 "data_size": 65536 00:14:44.219 } 00:14:44.219 ] 00:14:44.219 }' 00:14:44.219 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.219 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.787 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.787 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.787 07:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.787 07:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:44.787 07:11:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.787 [2024-11-20 07:11:42.053604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.787 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.046 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.046 "name": "Existed_Raid", 00:14:45.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.046 "strip_size_kb": 64, 00:14:45.046 "state": "configuring", 00:14:45.046 "raid_level": "concat", 00:14:45.046 "superblock": false, 00:14:45.046 "num_base_bdevs": 4, 00:14:45.046 "num_base_bdevs_discovered": 2, 00:14:45.046 "num_base_bdevs_operational": 4, 00:14:45.046 "base_bdevs_list": [ 00:14:45.046 { 00:14:45.046 "name": "BaseBdev1", 00:14:45.046 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:45.046 "is_configured": true, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "name": null, 00:14:45.046 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:45.046 "is_configured": false, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "name": null, 00:14:45.046 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:45.046 "is_configured": false, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 }, 00:14:45.046 { 00:14:45.046 "name": "BaseBdev4", 00:14:45.046 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:45.046 "is_configured": true, 00:14:45.046 "data_offset": 0, 00:14:45.046 "data_size": 65536 00:14:45.046 } 00:14:45.046 ] 00:14:45.046 }' 00:14:45.046 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.046 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 [2024-11-20 07:11:42.589722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.305 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.564 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.564 "name": "Existed_Raid", 00:14:45.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.564 "strip_size_kb": 64, 00:14:45.564 "state": "configuring", 00:14:45.564 "raid_level": "concat", 00:14:45.564 "superblock": false, 00:14:45.564 "num_base_bdevs": 4, 00:14:45.564 "num_base_bdevs_discovered": 3, 00:14:45.564 "num_base_bdevs_operational": 4, 00:14:45.564 "base_bdevs_list": [ 00:14:45.564 { 00:14:45.564 "name": "BaseBdev1", 00:14:45.564 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:45.564 "is_configured": true, 00:14:45.564 "data_offset": 0, 00:14:45.564 "data_size": 65536 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "name": null, 00:14:45.564 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:45.564 "is_configured": false, 00:14:45.564 "data_offset": 0, 00:14:45.564 "data_size": 65536 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "name": "BaseBdev3", 00:14:45.564 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:45.564 
"is_configured": true, 00:14:45.564 "data_offset": 0, 00:14:45.564 "data_size": 65536 00:14:45.564 }, 00:14:45.564 { 00:14:45.564 "name": "BaseBdev4", 00:14:45.564 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:45.564 "is_configured": true, 00:14:45.564 "data_offset": 0, 00:14:45.564 "data_size": 65536 00:14:45.564 } 00:14:45.564 ] 00:14:45.564 }' 00:14:45.564 07:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.564 07:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 [2024-11-20 07:11:43.213975] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.131 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.132 "name": "Existed_Raid", 00:14:46.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.132 "strip_size_kb": 64, 00:14:46.132 "state": "configuring", 00:14:46.132 "raid_level": "concat", 00:14:46.132 "superblock": false, 00:14:46.132 "num_base_bdevs": 4, 00:14:46.132 "num_base_bdevs_discovered": 2, 00:14:46.132 "num_base_bdevs_operational": 4, 
00:14:46.132 "base_bdevs_list": [ 00:14:46.132 { 00:14:46.132 "name": null, 00:14:46.132 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:46.132 "is_configured": false, 00:14:46.132 "data_offset": 0, 00:14:46.132 "data_size": 65536 00:14:46.132 }, 00:14:46.132 { 00:14:46.132 "name": null, 00:14:46.132 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:46.132 "is_configured": false, 00:14:46.132 "data_offset": 0, 00:14:46.132 "data_size": 65536 00:14:46.132 }, 00:14:46.132 { 00:14:46.132 "name": "BaseBdev3", 00:14:46.132 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:46.132 "is_configured": true, 00:14:46.132 "data_offset": 0, 00:14:46.132 "data_size": 65536 00:14:46.132 }, 00:14:46.132 { 00:14:46.132 "name": "BaseBdev4", 00:14:46.132 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:46.132 "is_configured": true, 00:14:46.132 "data_offset": 0, 00:14:46.132 "data_size": 65536 00:14:46.132 } 00:14:46.132 ] 00:14:46.132 }' 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.132 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.698 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:46.698 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.698 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.698 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:46.699 07:11:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.699 [2024-11-20 07:11:43.886418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.699 07:11:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.699 "name": "Existed_Raid", 00:14:46.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.699 "strip_size_kb": 64, 00:14:46.699 "state": "configuring", 00:14:46.699 "raid_level": "concat", 00:14:46.699 "superblock": false, 00:14:46.699 "num_base_bdevs": 4, 00:14:46.699 "num_base_bdevs_discovered": 3, 00:14:46.699 "num_base_bdevs_operational": 4, 00:14:46.699 "base_bdevs_list": [ 00:14:46.699 { 00:14:46.699 "name": null, 00:14:46.699 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:46.699 "is_configured": false, 00:14:46.699 "data_offset": 0, 00:14:46.699 "data_size": 65536 00:14:46.699 }, 00:14:46.699 { 00:14:46.699 "name": "BaseBdev2", 00:14:46.699 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:46.699 "is_configured": true, 00:14:46.699 "data_offset": 0, 00:14:46.699 "data_size": 65536 00:14:46.699 }, 00:14:46.699 { 00:14:46.699 "name": "BaseBdev3", 00:14:46.699 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:46.699 "is_configured": true, 00:14:46.699 "data_offset": 0, 00:14:46.699 "data_size": 65536 00:14:46.699 }, 00:14:46.699 { 00:14:46.699 "name": "BaseBdev4", 00:14:46.699 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:46.699 "is_configured": true, 00:14:46.699 "data_offset": 0, 00:14:46.699 "data_size": 65536 00:14:46.699 } 00:14:46.699 ] 00:14:46.699 }' 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.699 07:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d32eb77d-cd6e-49da-9b9b-504d2318c027 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.266 [2024-11-20 07:11:44.552373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:47.266 [2024-11-20 07:11:44.552710] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:47.266 [2024-11-20 07:11:44.552734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:47.266 [2024-11-20 07:11:44.553101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:47.266 [2024-11-20 07:11:44.553291] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:47.266 [2024-11-20 07:11:44.553312] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:47.266 [2024-11-20 07:11:44.553622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.266 NewBaseBdev 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:47.266 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.267 [ 00:14:47.267 { 
00:14:47.267 "name": "NewBaseBdev", 00:14:47.267 "aliases": [ 00:14:47.267 "d32eb77d-cd6e-49da-9b9b-504d2318c027" 00:14:47.267 ], 00:14:47.267 "product_name": "Malloc disk", 00:14:47.267 "block_size": 512, 00:14:47.267 "num_blocks": 65536, 00:14:47.267 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:47.267 "assigned_rate_limits": { 00:14:47.267 "rw_ios_per_sec": 0, 00:14:47.267 "rw_mbytes_per_sec": 0, 00:14:47.267 "r_mbytes_per_sec": 0, 00:14:47.267 "w_mbytes_per_sec": 0 00:14:47.267 }, 00:14:47.267 "claimed": true, 00:14:47.267 "claim_type": "exclusive_write", 00:14:47.267 "zoned": false, 00:14:47.267 "supported_io_types": { 00:14:47.267 "read": true, 00:14:47.267 "write": true, 00:14:47.267 "unmap": true, 00:14:47.267 "flush": true, 00:14:47.267 "reset": true, 00:14:47.267 "nvme_admin": false, 00:14:47.267 "nvme_io": false, 00:14:47.267 "nvme_io_md": false, 00:14:47.267 "write_zeroes": true, 00:14:47.267 "zcopy": true, 00:14:47.267 "get_zone_info": false, 00:14:47.267 "zone_management": false, 00:14:47.267 "zone_append": false, 00:14:47.267 "compare": false, 00:14:47.267 "compare_and_write": false, 00:14:47.267 "abort": true, 00:14:47.267 "seek_hole": false, 00:14:47.267 "seek_data": false, 00:14:47.267 "copy": true, 00:14:47.267 "nvme_iov_md": false 00:14:47.267 }, 00:14:47.267 "memory_domains": [ 00:14:47.267 { 00:14:47.267 "dma_device_id": "system", 00:14:47.267 "dma_device_type": 1 00:14:47.267 }, 00:14:47.267 { 00:14:47.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.267 "dma_device_type": 2 00:14:47.267 } 00:14:47.267 ], 00:14:47.267 "driver_specific": {} 00:14:47.267 } 00:14:47.267 ] 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:47.267 
07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.267 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.525 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.525 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.525 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.525 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.525 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.525 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.525 "name": "Existed_Raid", 00:14:47.525 "uuid": "59f5aa62-12d0-43a2-b180-438f9de1d178", 00:14:47.525 "strip_size_kb": 64, 00:14:47.525 "state": "online", 00:14:47.525 "raid_level": "concat", 00:14:47.525 "superblock": false, 00:14:47.525 "num_base_bdevs": 4, 00:14:47.525 "num_base_bdevs_discovered": 4, 00:14:47.525 
"num_base_bdevs_operational": 4, 00:14:47.525 "base_bdevs_list": [ 00:14:47.525 { 00:14:47.525 "name": "NewBaseBdev", 00:14:47.525 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:47.525 "is_configured": true, 00:14:47.525 "data_offset": 0, 00:14:47.525 "data_size": 65536 00:14:47.525 }, 00:14:47.525 { 00:14:47.525 "name": "BaseBdev2", 00:14:47.525 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:47.525 "is_configured": true, 00:14:47.525 "data_offset": 0, 00:14:47.526 "data_size": 65536 00:14:47.526 }, 00:14:47.526 { 00:14:47.526 "name": "BaseBdev3", 00:14:47.526 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:47.526 "is_configured": true, 00:14:47.526 "data_offset": 0, 00:14:47.526 "data_size": 65536 00:14:47.526 }, 00:14:47.526 { 00:14:47.526 "name": "BaseBdev4", 00:14:47.526 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:47.526 "is_configured": true, 00:14:47.526 "data_offset": 0, 00:14:47.526 "data_size": 65536 00:14:47.526 } 00:14:47.526 ] 00:14:47.526 }' 00:14:47.526 07:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.526 07:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.093 [2024-11-20 07:11:45.113036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:48.093 "name": "Existed_Raid", 00:14:48.093 "aliases": [ 00:14:48.093 "59f5aa62-12d0-43a2-b180-438f9de1d178" 00:14:48.093 ], 00:14:48.093 "product_name": "Raid Volume", 00:14:48.093 "block_size": 512, 00:14:48.093 "num_blocks": 262144, 00:14:48.093 "uuid": "59f5aa62-12d0-43a2-b180-438f9de1d178", 00:14:48.093 "assigned_rate_limits": { 00:14:48.093 "rw_ios_per_sec": 0, 00:14:48.093 "rw_mbytes_per_sec": 0, 00:14:48.093 "r_mbytes_per_sec": 0, 00:14:48.093 "w_mbytes_per_sec": 0 00:14:48.093 }, 00:14:48.093 "claimed": false, 00:14:48.093 "zoned": false, 00:14:48.093 "supported_io_types": { 00:14:48.093 "read": true, 00:14:48.093 "write": true, 00:14:48.093 "unmap": true, 00:14:48.093 "flush": true, 00:14:48.093 "reset": true, 00:14:48.093 "nvme_admin": false, 00:14:48.093 "nvme_io": false, 00:14:48.093 "nvme_io_md": false, 00:14:48.093 "write_zeroes": true, 00:14:48.093 "zcopy": false, 00:14:48.093 "get_zone_info": false, 00:14:48.093 "zone_management": false, 00:14:48.093 "zone_append": false, 00:14:48.093 "compare": false, 00:14:48.093 "compare_and_write": false, 00:14:48.093 "abort": false, 00:14:48.093 "seek_hole": false, 00:14:48.093 "seek_data": false, 00:14:48.093 "copy": false, 00:14:48.093 "nvme_iov_md": false 00:14:48.093 }, 00:14:48.093 "memory_domains": [ 00:14:48.093 { 00:14:48.093 "dma_device_id": "system", 
00:14:48.093 "dma_device_type": 1 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.093 "dma_device_type": 2 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "system", 00:14:48.093 "dma_device_type": 1 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.093 "dma_device_type": 2 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "system", 00:14:48.093 "dma_device_type": 1 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.093 "dma_device_type": 2 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "system", 00:14:48.093 "dma_device_type": 1 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.093 "dma_device_type": 2 00:14:48.093 } 00:14:48.093 ], 00:14:48.093 "driver_specific": { 00:14:48.093 "raid": { 00:14:48.093 "uuid": "59f5aa62-12d0-43a2-b180-438f9de1d178", 00:14:48.093 "strip_size_kb": 64, 00:14:48.093 "state": "online", 00:14:48.093 "raid_level": "concat", 00:14:48.093 "superblock": false, 00:14:48.093 "num_base_bdevs": 4, 00:14:48.093 "num_base_bdevs_discovered": 4, 00:14:48.093 "num_base_bdevs_operational": 4, 00:14:48.093 "base_bdevs_list": [ 00:14:48.093 { 00:14:48.093 "name": "NewBaseBdev", 00:14:48.093 "uuid": "d32eb77d-cd6e-49da-9b9b-504d2318c027", 00:14:48.093 "is_configured": true, 00:14:48.093 "data_offset": 0, 00:14:48.093 "data_size": 65536 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "name": "BaseBdev2", 00:14:48.093 "uuid": "cc88372f-ae80-4de5-8507-eb4d2e33838b", 00:14:48.093 "is_configured": true, 00:14:48.093 "data_offset": 0, 00:14:48.093 "data_size": 65536 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "name": "BaseBdev3", 00:14:48.093 "uuid": "83bcf1f6-5f2e-4461-9497-69c831db8de2", 00:14:48.093 "is_configured": true, 00:14:48.093 "data_offset": 0, 00:14:48.093 "data_size": 65536 00:14:48.093 }, 00:14:48.093 { 00:14:48.093 "name": "BaseBdev4", 
00:14:48.093 "uuid": "802c989a-2c7f-46ed-a300-0dda58c23b51", 00:14:48.093 "is_configured": true, 00:14:48.093 "data_offset": 0, 00:14:48.093 "data_size": 65536 00:14:48.093 } 00:14:48.093 ] 00:14:48.093 } 00:14:48.093 } 00:14:48.093 }' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:48.093 BaseBdev2 00:14:48.093 BaseBdev3 00:14:48.093 BaseBdev4' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.093 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:48.352 07:11:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.352 [2024-11-20 07:11:45.488678] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.352 [2024-11-20 07:11:45.488721] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.352 [2024-11-20 07:11:45.488821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.352 [2024-11-20 07:11:45.488936] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.352 [2024-11-20 07:11:45.488955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71314 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71314 ']' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71314 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71314 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.352 killing process with pid 71314 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71314' 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71314 00:14:48.352 [2024-11-20 07:11:45.526590] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.352 07:11:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71314 00:14:48.611 [2024-11-20 07:11:45.880489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:49.988 00:14:49.988 real 0m12.840s 00:14:49.988 user 0m21.465s 00:14:49.988 sys 0m1.710s 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.988 ************************************ 00:14:49.988 END TEST raid_state_function_test 00:14:49.988 ************************************ 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.988 07:11:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:14:49.988 07:11:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:49.988 07:11:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.988 07:11:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:49.988 ************************************ 00:14:49.988 START TEST raid_state_function_test_sb 00:14:49.988 ************************************ 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:49.988 07:11:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71997 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71997' 00:14:49.988 Process raid pid: 71997 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71997 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71997 ']' 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:49.988 07:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.988 [2024-11-20 07:11:47.079880] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:14:49.989 [2024-11-20 07:11:47.080917] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.989 [2024-11-20 07:11:47.270225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.246 [2024-11-20 07:11:47.398248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.503 [2024-11-20 07:11:47.604706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.503 [2024-11-20 07:11:47.604924] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.072 [2024-11-20 07:11:48.101523] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.072 [2024-11-20 07:11:48.101587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.072 [2024-11-20 07:11:48.101604] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.072 [2024-11-20 07:11:48.101620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.072 [2024-11-20 07:11:48.101630] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:51.072 [2024-11-20 07:11:48.101644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.072 [2024-11-20 07:11:48.101653] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:51.072 [2024-11-20 07:11:48.101667] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.072 
07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.072 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.072 "name": "Existed_Raid", 00:14:51.072 "uuid": "e8736c64-ccb9-463d-8ba2-1046ccbaf738", 00:14:51.072 "strip_size_kb": 64, 00:14:51.072 "state": "configuring", 00:14:51.072 "raid_level": "concat", 00:14:51.072 "superblock": true, 00:14:51.072 "num_base_bdevs": 4, 00:14:51.072 "num_base_bdevs_discovered": 0, 00:14:51.072 "num_base_bdevs_operational": 4, 00:14:51.072 "base_bdevs_list": [ 00:14:51.072 { 00:14:51.072 "name": "BaseBdev1", 00:14:51.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.072 "is_configured": false, 00:14:51.072 "data_offset": 0, 00:14:51.072 "data_size": 0 00:14:51.072 }, 00:14:51.072 { 00:14:51.072 "name": "BaseBdev2", 00:14:51.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.072 "is_configured": false, 00:14:51.072 "data_offset": 0, 00:14:51.072 "data_size": 0 00:14:51.072 }, 00:14:51.072 { 00:14:51.072 "name": "BaseBdev3", 00:14:51.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.072 "is_configured": false, 00:14:51.072 "data_offset": 0, 00:14:51.072 "data_size": 0 00:14:51.072 }, 00:14:51.072 { 00:14:51.073 "name": "BaseBdev4", 00:14:51.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.073 "is_configured": false, 00:14:51.073 "data_offset": 0, 00:14:51.073 "data_size": 0 00:14:51.073 } 00:14:51.073 ] 00:14:51.073 }' 00:14:51.073 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.073 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.331 07:11:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.331 [2024-11-20 07:11:48.625580] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.331 [2024-11-20 07:11:48.625626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.331 [2024-11-20 07:11:48.633589] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:51.331 [2024-11-20 07:11:48.633780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:51.331 [2024-11-20 07:11:48.633912] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:51.331 [2024-11-20 07:11:48.633976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:51.331 [2024-11-20 07:11:48.634197] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:51.331 [2024-11-20 07:11:48.634241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:51.331 [2024-11-20 07:11:48.634254] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:14:51.331 [2024-11-20 07:11:48.634269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.331 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.591 [2024-11-20 07:11:48.678698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.591 BaseBdev1 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.591 [ 00:14:51.591 { 00:14:51.591 "name": "BaseBdev1", 00:14:51.591 "aliases": [ 00:14:51.591 "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72" 00:14:51.591 ], 00:14:51.591 "product_name": "Malloc disk", 00:14:51.591 "block_size": 512, 00:14:51.591 "num_blocks": 65536, 00:14:51.591 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:51.591 "assigned_rate_limits": { 00:14:51.591 "rw_ios_per_sec": 0, 00:14:51.591 "rw_mbytes_per_sec": 0, 00:14:51.591 "r_mbytes_per_sec": 0, 00:14:51.591 "w_mbytes_per_sec": 0 00:14:51.591 }, 00:14:51.591 "claimed": true, 00:14:51.591 "claim_type": "exclusive_write", 00:14:51.591 "zoned": false, 00:14:51.591 "supported_io_types": { 00:14:51.591 "read": true, 00:14:51.591 "write": true, 00:14:51.591 "unmap": true, 00:14:51.591 "flush": true, 00:14:51.591 "reset": true, 00:14:51.591 "nvme_admin": false, 00:14:51.591 "nvme_io": false, 00:14:51.591 "nvme_io_md": false, 00:14:51.591 "write_zeroes": true, 00:14:51.591 "zcopy": true, 00:14:51.591 "get_zone_info": false, 00:14:51.591 "zone_management": false, 00:14:51.591 "zone_append": false, 00:14:51.591 "compare": false, 00:14:51.591 "compare_and_write": false, 00:14:51.591 "abort": true, 00:14:51.591 "seek_hole": false, 00:14:51.591 "seek_data": false, 00:14:51.591 "copy": true, 00:14:51.591 "nvme_iov_md": false 00:14:51.591 }, 00:14:51.591 "memory_domains": [ 00:14:51.591 { 00:14:51.591 "dma_device_id": "system", 00:14:51.591 "dma_device_type": 1 00:14:51.591 }, 00:14:51.591 { 00:14:51.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.591 "dma_device_type": 2 00:14:51.591 } 
00:14:51.591 ], 00:14:51.591 "driver_specific": {} 00:14:51.591 } 00:14:51.591 ] 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.591 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.592 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.592 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.592 07:11:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.592 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.592 "name": "Existed_Raid", 00:14:51.592 "uuid": "6afa99be-17e7-408e-a8e9-3ccc38f3cc15", 00:14:51.592 "strip_size_kb": 64, 00:14:51.592 "state": "configuring", 00:14:51.592 "raid_level": "concat", 00:14:51.592 "superblock": true, 00:14:51.592 "num_base_bdevs": 4, 00:14:51.592 "num_base_bdevs_discovered": 1, 00:14:51.592 "num_base_bdevs_operational": 4, 00:14:51.592 "base_bdevs_list": [ 00:14:51.592 { 00:14:51.592 "name": "BaseBdev1", 00:14:51.592 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:51.592 "is_configured": true, 00:14:51.592 "data_offset": 2048, 00:14:51.592 "data_size": 63488 00:14:51.592 }, 00:14:51.592 { 00:14:51.592 "name": "BaseBdev2", 00:14:51.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.592 "is_configured": false, 00:14:51.592 "data_offset": 0, 00:14:51.592 "data_size": 0 00:14:51.592 }, 00:14:51.592 { 00:14:51.592 "name": "BaseBdev3", 00:14:51.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.592 "is_configured": false, 00:14:51.592 "data_offset": 0, 00:14:51.592 "data_size": 0 00:14:51.592 }, 00:14:51.592 { 00:14:51.592 "name": "BaseBdev4", 00:14:51.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.592 "is_configured": false, 00:14:51.592 "data_offset": 0, 00:14:51.592 "data_size": 0 00:14:51.592 } 00:14:51.592 ] 00:14:51.592 }' 00:14:51.592 07:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.592 07:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.158 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:52.158 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.158 07:11:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.158 [2024-11-20 07:11:49.238945] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.159 [2024-11-20 07:11:49.239137] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.159 [2024-11-20 07:11:49.247021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.159 [2024-11-20 07:11:49.249425] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.159 [2024-11-20 07:11:49.249485] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.159 [2024-11-20 07:11:49.249502] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:52.159 [2024-11-20 07:11:49.249530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:52.159 [2024-11-20 07:11:49.249540] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:52.159 [2024-11-20 07:11:49.249553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:52.159 "name": "Existed_Raid", 00:14:52.159 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:52.159 "strip_size_kb": 64, 00:14:52.159 "state": "configuring", 00:14:52.159 "raid_level": "concat", 00:14:52.159 "superblock": true, 00:14:52.159 "num_base_bdevs": 4, 00:14:52.159 "num_base_bdevs_discovered": 1, 00:14:52.159 "num_base_bdevs_operational": 4, 00:14:52.159 "base_bdevs_list": [ 00:14:52.159 { 00:14:52.159 "name": "BaseBdev1", 00:14:52.159 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:52.159 "is_configured": true, 00:14:52.159 "data_offset": 2048, 00:14:52.159 "data_size": 63488 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev2", 00:14:52.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.159 "is_configured": false, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 0 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev3", 00:14:52.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.159 "is_configured": false, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 0 00:14:52.159 }, 00:14:52.159 { 00:14:52.159 "name": "BaseBdev4", 00:14:52.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.159 "is_configured": false, 00:14:52.159 "data_offset": 0, 00:14:52.159 "data_size": 0 00:14:52.159 } 00:14:52.159 ] 00:14:52.159 }' 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.159 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.449 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:52.449 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.449 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 [2024-11-20 07:11:49.793045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:14:52.733 BaseBdev2 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.733 [ 00:14:52.733 { 00:14:52.733 "name": "BaseBdev2", 00:14:52.733 "aliases": [ 00:14:52.733 "1022fb95-b31b-4deb-93c6-279619655c4d" 00:14:52.733 ], 00:14:52.733 "product_name": "Malloc disk", 00:14:52.733 "block_size": 512, 00:14:52.733 "num_blocks": 65536, 00:14:52.733 "uuid": "1022fb95-b31b-4deb-93c6-279619655c4d", 
00:14:52.733 "assigned_rate_limits": { 00:14:52.733 "rw_ios_per_sec": 0, 00:14:52.733 "rw_mbytes_per_sec": 0, 00:14:52.733 "r_mbytes_per_sec": 0, 00:14:52.733 "w_mbytes_per_sec": 0 00:14:52.733 }, 00:14:52.733 "claimed": true, 00:14:52.733 "claim_type": "exclusive_write", 00:14:52.733 "zoned": false, 00:14:52.733 "supported_io_types": { 00:14:52.733 "read": true, 00:14:52.733 "write": true, 00:14:52.733 "unmap": true, 00:14:52.733 "flush": true, 00:14:52.733 "reset": true, 00:14:52.733 "nvme_admin": false, 00:14:52.733 "nvme_io": false, 00:14:52.733 "nvme_io_md": false, 00:14:52.733 "write_zeroes": true, 00:14:52.733 "zcopy": true, 00:14:52.733 "get_zone_info": false, 00:14:52.733 "zone_management": false, 00:14:52.733 "zone_append": false, 00:14:52.733 "compare": false, 00:14:52.733 "compare_and_write": false, 00:14:52.733 "abort": true, 00:14:52.733 "seek_hole": false, 00:14:52.733 "seek_data": false, 00:14:52.733 "copy": true, 00:14:52.733 "nvme_iov_md": false 00:14:52.733 }, 00:14:52.733 "memory_domains": [ 00:14:52.733 { 00:14:52.733 "dma_device_id": "system", 00:14:52.733 "dma_device_type": 1 00:14:52.733 }, 00:14:52.733 { 00:14:52.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.733 "dma_device_type": 2 00:14:52.733 } 00:14:52.733 ], 00:14:52.733 "driver_specific": {} 00:14:52.733 } 00:14:52.733 ] 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.733 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.734 "name": "Existed_Raid", 00:14:52.734 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:52.734 "strip_size_kb": 64, 00:14:52.734 "state": "configuring", 00:14:52.734 "raid_level": "concat", 00:14:52.734 "superblock": true, 00:14:52.734 "num_base_bdevs": 4, 00:14:52.734 "num_base_bdevs_discovered": 2, 00:14:52.734 
"num_base_bdevs_operational": 4, 00:14:52.734 "base_bdevs_list": [ 00:14:52.734 { 00:14:52.734 "name": "BaseBdev1", 00:14:52.734 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:52.734 "is_configured": true, 00:14:52.734 "data_offset": 2048, 00:14:52.734 "data_size": 63488 00:14:52.734 }, 00:14:52.734 { 00:14:52.734 "name": "BaseBdev2", 00:14:52.734 "uuid": "1022fb95-b31b-4deb-93c6-279619655c4d", 00:14:52.734 "is_configured": true, 00:14:52.734 "data_offset": 2048, 00:14:52.734 "data_size": 63488 00:14:52.734 }, 00:14:52.734 { 00:14:52.734 "name": "BaseBdev3", 00:14:52.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.734 "is_configured": false, 00:14:52.734 "data_offset": 0, 00:14:52.734 "data_size": 0 00:14:52.734 }, 00:14:52.734 { 00:14:52.734 "name": "BaseBdev4", 00:14:52.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.734 "is_configured": false, 00:14:52.734 "data_offset": 0, 00:14:52.734 "data_size": 0 00:14:52.734 } 00:14:52.734 ] 00:14:52.734 }' 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.734 07:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.302 BaseBdev3 00:14:53.302 [2024-11-20 07:11:50.378981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.302 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.302 [ 00:14:53.302 { 00:14:53.302 "name": "BaseBdev3", 00:14:53.302 "aliases": [ 00:14:53.302 "9897804f-eea0-433f-b090-2f0b0fca63e4" 00:14:53.302 ], 00:14:53.302 "product_name": "Malloc disk", 00:14:53.302 "block_size": 512, 00:14:53.302 "num_blocks": 65536, 00:14:53.302 "uuid": "9897804f-eea0-433f-b090-2f0b0fca63e4", 00:14:53.302 "assigned_rate_limits": { 00:14:53.302 "rw_ios_per_sec": 0, 00:14:53.302 "rw_mbytes_per_sec": 0, 00:14:53.302 "r_mbytes_per_sec": 0, 00:14:53.302 "w_mbytes_per_sec": 0 00:14:53.302 }, 00:14:53.302 "claimed": true, 00:14:53.302 "claim_type": "exclusive_write", 00:14:53.302 "zoned": false, 00:14:53.302 "supported_io_types": { 
00:14:53.302 "read": true, 00:14:53.302 "write": true, 00:14:53.302 "unmap": true, 00:14:53.302 "flush": true, 00:14:53.302 "reset": true, 00:14:53.302 "nvme_admin": false, 00:14:53.302 "nvme_io": false, 00:14:53.302 "nvme_io_md": false, 00:14:53.302 "write_zeroes": true, 00:14:53.302 "zcopy": true, 00:14:53.302 "get_zone_info": false, 00:14:53.302 "zone_management": false, 00:14:53.302 "zone_append": false, 00:14:53.302 "compare": false, 00:14:53.302 "compare_and_write": false, 00:14:53.302 "abort": true, 00:14:53.302 "seek_hole": false, 00:14:53.302 "seek_data": false, 00:14:53.302 "copy": true, 00:14:53.302 "nvme_iov_md": false 00:14:53.302 }, 00:14:53.302 "memory_domains": [ 00:14:53.302 { 00:14:53.303 "dma_device_id": "system", 00:14:53.303 "dma_device_type": 1 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.303 "dma_device_type": 2 00:14:53.303 } 00:14:53.303 ], 00:14:53.303 "driver_specific": {} 00:14:53.303 } 00:14:53.303 ] 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.303 "name": "Existed_Raid", 00:14:53.303 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:53.303 "strip_size_kb": 64, 00:14:53.303 "state": "configuring", 00:14:53.303 "raid_level": "concat", 00:14:53.303 "superblock": true, 00:14:53.303 "num_base_bdevs": 4, 00:14:53.303 "num_base_bdevs_discovered": 3, 00:14:53.303 "num_base_bdevs_operational": 4, 00:14:53.303 "base_bdevs_list": [ 00:14:53.303 { 00:14:53.303 "name": "BaseBdev1", 00:14:53.303 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:53.303 "is_configured": true, 00:14:53.303 "data_offset": 2048, 00:14:53.303 "data_size": 63488 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "name": "BaseBdev2", 00:14:53.303 
"uuid": "1022fb95-b31b-4deb-93c6-279619655c4d", 00:14:53.303 "is_configured": true, 00:14:53.303 "data_offset": 2048, 00:14:53.303 "data_size": 63488 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "name": "BaseBdev3", 00:14:53.303 "uuid": "9897804f-eea0-433f-b090-2f0b0fca63e4", 00:14:53.303 "is_configured": true, 00:14:53.303 "data_offset": 2048, 00:14:53.303 "data_size": 63488 00:14:53.303 }, 00:14:53.303 { 00:14:53.303 "name": "BaseBdev4", 00:14:53.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.303 "is_configured": false, 00:14:53.303 "data_offset": 0, 00:14:53.303 "data_size": 0 00:14:53.303 } 00:14:53.303 ] 00:14:53.303 }' 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.303 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.872 [2024-11-20 07:11:50.937793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:53.872 [2024-11-20 07:11:50.938165] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:53.872 [2024-11-20 07:11:50.938185] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:53.872 BaseBdev4 00:14:53.872 [2024-11-20 07:11:50.938518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:53.872 [2024-11-20 07:11:50.938722] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:53.872 [2024-11-20 07:11:50.938744] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:14:53.872 [2024-11-20 07:11:50.938941] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:53.872 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.873 [ 00:14:53.873 { 00:14:53.873 "name": "BaseBdev4", 00:14:53.873 "aliases": [ 00:14:53.873 "b176878d-6173-4e59-86f7-fcd5889e557c" 00:14:53.873 ], 00:14:53.873 "product_name": "Malloc disk", 00:14:53.873 "block_size": 512, 00:14:53.873 
"num_blocks": 65536, 00:14:53.873 "uuid": "b176878d-6173-4e59-86f7-fcd5889e557c", 00:14:53.873 "assigned_rate_limits": { 00:14:53.873 "rw_ios_per_sec": 0, 00:14:53.873 "rw_mbytes_per_sec": 0, 00:14:53.873 "r_mbytes_per_sec": 0, 00:14:53.873 "w_mbytes_per_sec": 0 00:14:53.873 }, 00:14:53.873 "claimed": true, 00:14:53.873 "claim_type": "exclusive_write", 00:14:53.873 "zoned": false, 00:14:53.873 "supported_io_types": { 00:14:53.873 "read": true, 00:14:53.873 "write": true, 00:14:53.873 "unmap": true, 00:14:53.873 "flush": true, 00:14:53.873 "reset": true, 00:14:53.873 "nvme_admin": false, 00:14:53.873 "nvme_io": false, 00:14:53.873 "nvme_io_md": false, 00:14:53.873 "write_zeroes": true, 00:14:53.873 "zcopy": true, 00:14:53.873 "get_zone_info": false, 00:14:53.873 "zone_management": false, 00:14:53.873 "zone_append": false, 00:14:53.873 "compare": false, 00:14:53.873 "compare_and_write": false, 00:14:53.873 "abort": true, 00:14:53.873 "seek_hole": false, 00:14:53.873 "seek_data": false, 00:14:53.873 "copy": true, 00:14:53.873 "nvme_iov_md": false 00:14:53.873 }, 00:14:53.873 "memory_domains": [ 00:14:53.873 { 00:14:53.873 "dma_device_id": "system", 00:14:53.873 "dma_device_type": 1 00:14:53.873 }, 00:14:53.873 { 00:14:53.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.873 "dma_device_type": 2 00:14:53.873 } 00:14:53.873 ], 00:14:53.873 "driver_specific": {} 00:14:53.873 } 00:14:53.873 ] 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.873 07:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.873 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.873 "name": "Existed_Raid", 00:14:53.873 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:53.873 "strip_size_kb": 64, 00:14:53.873 "state": "online", 00:14:53.873 "raid_level": "concat", 00:14:53.873 "superblock": true, 00:14:53.873 "num_base_bdevs": 4, 
00:14:53.873 "num_base_bdevs_discovered": 4, 00:14:53.873 "num_base_bdevs_operational": 4, 00:14:53.873 "base_bdevs_list": [ 00:14:53.873 { 00:14:53.873 "name": "BaseBdev1", 00:14:53.873 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:53.873 "is_configured": true, 00:14:53.873 "data_offset": 2048, 00:14:53.873 "data_size": 63488 00:14:53.873 }, 00:14:53.873 { 00:14:53.873 "name": "BaseBdev2", 00:14:53.873 "uuid": "1022fb95-b31b-4deb-93c6-279619655c4d", 00:14:53.873 "is_configured": true, 00:14:53.873 "data_offset": 2048, 00:14:53.873 "data_size": 63488 00:14:53.873 }, 00:14:53.873 { 00:14:53.873 "name": "BaseBdev3", 00:14:53.873 "uuid": "9897804f-eea0-433f-b090-2f0b0fca63e4", 00:14:53.873 "is_configured": true, 00:14:53.873 "data_offset": 2048, 00:14:53.873 "data_size": 63488 00:14:53.873 }, 00:14:53.873 { 00:14:53.873 "name": "BaseBdev4", 00:14:53.873 "uuid": "b176878d-6173-4e59-86f7-fcd5889e557c", 00:14:53.873 "is_configured": true, 00:14:53.873 "data_offset": 2048, 00:14:53.873 "data_size": 63488 00:14:53.873 } 00:14:53.873 ] 00:14:53.873 }' 00:14:53.873 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.873 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:54.442 
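The trace above repeatedly verifies the raid bdev by dumping `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checking fields such as `state`, `num_base_bdevs_discovered`, and the `base_bdevs_list` entries. A minimal Python sketch of that check, using a trimmed sample of the JSON shape shown in the log (the helper name and sample values are illustrative, not part of the SPDK test suite):

```python
import json

# Trimmed sample of one record dumped by `bdev_raid_get_bdevs all`
# in the log above (values illustrative).
RAID_JSON = '''
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
'''

def verify_raid_state(info: dict, expected_state: str,
                      expected_operational: int) -> bool:
    """Sketch of the checks the shell-side verify_raid_bdev_state
    performs: expected state, expected operational count, and every
    discovered base bdev reported as configured."""
    configured = [b for b in info["base_bdevs_list"] if b["is_configured"]]
    return (info["state"] == expected_state
            and info["num_base_bdevs_operational"] == expected_operational
            and len(configured) == info["num_base_bdevs_discovered"])

info = json.loads(RAID_JSON)
print(verify_raid_state(info, "online", 4))  # → True
```

The shell test does the equivalent comparisons with `jq` string extraction; the Python form just makes the field-by-field logic explicit.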
07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:54.442 [2024-11-20 07:11:51.498486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.442 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:54.442 "name": "Existed_Raid", 00:14:54.442 "aliases": [ 00:14:54.442 "6f680cd3-e803-4815-b07a-47b2a95c4bdb" 00:14:54.442 ], 00:14:54.442 "product_name": "Raid Volume", 00:14:54.442 "block_size": 512, 00:14:54.442 "num_blocks": 253952, 00:14:54.442 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:54.442 "assigned_rate_limits": { 00:14:54.442 "rw_ios_per_sec": 0, 00:14:54.442 "rw_mbytes_per_sec": 0, 00:14:54.442 "r_mbytes_per_sec": 0, 00:14:54.442 "w_mbytes_per_sec": 0 00:14:54.442 }, 00:14:54.442 "claimed": false, 00:14:54.442 "zoned": false, 00:14:54.442 "supported_io_types": { 00:14:54.442 "read": true, 00:14:54.442 "write": true, 00:14:54.442 "unmap": true, 00:14:54.442 "flush": true, 00:14:54.442 "reset": true, 00:14:54.442 "nvme_admin": false, 00:14:54.442 "nvme_io": false, 00:14:54.442 "nvme_io_md": false, 00:14:54.442 "write_zeroes": true, 00:14:54.442 "zcopy": false, 00:14:54.442 "get_zone_info": false, 00:14:54.442 "zone_management": false, 00:14:54.442 "zone_append": false, 00:14:54.442 "compare": false, 00:14:54.442 "compare_and_write": false, 00:14:54.442 "abort": false, 00:14:54.442 "seek_hole": false, 00:14:54.442 "seek_data": false, 00:14:54.442 "copy": false, 00:14:54.442 
"nvme_iov_md": false 00:14:54.442 }, 00:14:54.442 "memory_domains": [ 00:14:54.442 { 00:14:54.442 "dma_device_id": "system", 00:14:54.442 "dma_device_type": 1 00:14:54.442 }, 00:14:54.442 { 00:14:54.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.442 "dma_device_type": 2 00:14:54.442 }, 00:14:54.442 { 00:14:54.442 "dma_device_id": "system", 00:14:54.442 "dma_device_type": 1 00:14:54.442 }, 00:14:54.442 { 00:14:54.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.442 "dma_device_type": 2 00:14:54.442 }, 00:14:54.442 { 00:14:54.442 "dma_device_id": "system", 00:14:54.442 "dma_device_type": 1 00:14:54.442 }, 00:14:54.442 { 00:14:54.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.442 "dma_device_type": 2 00:14:54.442 }, 00:14:54.442 { 00:14:54.442 "dma_device_id": "system", 00:14:54.443 "dma_device_type": 1 00:14:54.443 }, 00:14:54.443 { 00:14:54.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.443 "dma_device_type": 2 00:14:54.443 } 00:14:54.443 ], 00:14:54.443 "driver_specific": { 00:14:54.443 "raid": { 00:14:54.443 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:54.443 "strip_size_kb": 64, 00:14:54.443 "state": "online", 00:14:54.443 "raid_level": "concat", 00:14:54.443 "superblock": true, 00:14:54.443 "num_base_bdevs": 4, 00:14:54.443 "num_base_bdevs_discovered": 4, 00:14:54.443 "num_base_bdevs_operational": 4, 00:14:54.443 "base_bdevs_list": [ 00:14:54.443 { 00:14:54.443 "name": "BaseBdev1", 00:14:54.443 "uuid": "2cdecf5e-2d99-46ca-a1ce-edbc8ab6cf72", 00:14:54.443 "is_configured": true, 00:14:54.443 "data_offset": 2048, 00:14:54.443 "data_size": 63488 00:14:54.443 }, 00:14:54.443 { 00:14:54.443 "name": "BaseBdev2", 00:14:54.443 "uuid": "1022fb95-b31b-4deb-93c6-279619655c4d", 00:14:54.443 "is_configured": true, 00:14:54.443 "data_offset": 2048, 00:14:54.443 "data_size": 63488 00:14:54.443 }, 00:14:54.443 { 00:14:54.443 "name": "BaseBdev3", 00:14:54.443 "uuid": "9897804f-eea0-433f-b090-2f0b0fca63e4", 00:14:54.443 "is_configured": true, 
00:14:54.443 "data_offset": 2048, 00:14:54.443 "data_size": 63488 00:14:54.443 }, 00:14:54.443 { 00:14:54.443 "name": "BaseBdev4", 00:14:54.443 "uuid": "b176878d-6173-4e59-86f7-fcd5889e557c", 00:14:54.443 "is_configured": true, 00:14:54.443 "data_offset": 2048, 00:14:54.443 "data_size": 63488 00:14:54.443 } 00:14:54.443 ] 00:14:54.443 } 00:14:54.443 } 00:14:54.443 }' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:54.443 BaseBdev2 00:14:54.443 BaseBdev3 00:14:54.443 BaseBdev4' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.443 07:11:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.443 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.703 [2024-11-20 07:11:51.854204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:54.703 [2024-11-20 07:11:51.854378] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.703 [2024-11-20 07:11:51.854562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:54.703 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.703 "name": "Existed_Raid", 00:14:54.703 "uuid": "6f680cd3-e803-4815-b07a-47b2a95c4bdb", 00:14:54.703 "strip_size_kb": 64, 00:14:54.703 "state": "offline", 00:14:54.703 "raid_level": "concat", 00:14:54.703 "superblock": true, 00:14:54.703 "num_base_bdevs": 4, 00:14:54.703 "num_base_bdevs_discovered": 3, 00:14:54.703 "num_base_bdevs_operational": 3, 00:14:54.703 "base_bdevs_list": [ 00:14:54.703 { 00:14:54.703 "name": null, 00:14:54.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.703 "is_configured": false, 00:14:54.703 "data_offset": 0, 00:14:54.703 "data_size": 63488 00:14:54.703 }, 00:14:54.703 { 00:14:54.703 "name": "BaseBdev2", 00:14:54.703 "uuid": "1022fb95-b31b-4deb-93c6-279619655c4d", 00:14:54.703 "is_configured": true, 00:14:54.703 "data_offset": 2048, 00:14:54.703 "data_size": 63488 00:14:54.703 }, 00:14:54.703 { 00:14:54.703 "name": "BaseBdev3", 00:14:54.703 "uuid": "9897804f-eea0-433f-b090-2f0b0fca63e4", 00:14:54.703 "is_configured": true, 00:14:54.703 "data_offset": 2048, 00:14:54.703 "data_size": 63488 00:14:54.703 }, 00:14:54.703 { 00:14:54.703 "name": "BaseBdev4", 00:14:54.703 "uuid": "b176878d-6173-4e59-86f7-fcd5889e557c", 00:14:54.703 "is_configured": true, 00:14:54.704 "data_offset": 2048, 00:14:54.704 "data_size": 63488 00:14:54.704 } 00:14:54.704 ] 00:14:54.704 }' 00:14:54.704 07:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.704 07:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.271 
07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.271 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.271 [2024-11-20 07:11:52.516224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.531 [2024-11-20 07:11:52.665952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:14:55.531 07:11:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.531 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.531 [2024-11-20 07:11:52.802725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:55.531 [2024-11-20 07:11:52.802918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 BaseBdev2 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 [ 00:14:55.792 { 00:14:55.792 "name": "BaseBdev2", 00:14:55.792 "aliases": [ 00:14:55.792 
"f4f55f1f-4706-43ad-a3ba-f42c420cc13a" 00:14:55.792 ], 00:14:55.792 "product_name": "Malloc disk", 00:14:55.792 "block_size": 512, 00:14:55.792 "num_blocks": 65536, 00:14:55.792 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:55.792 "assigned_rate_limits": { 00:14:55.792 "rw_ios_per_sec": 0, 00:14:55.792 "rw_mbytes_per_sec": 0, 00:14:55.792 "r_mbytes_per_sec": 0, 00:14:55.792 "w_mbytes_per_sec": 0 00:14:55.792 }, 00:14:55.792 "claimed": false, 00:14:55.792 "zoned": false, 00:14:55.792 "supported_io_types": { 00:14:55.792 "read": true, 00:14:55.792 "write": true, 00:14:55.792 "unmap": true, 00:14:55.792 "flush": true, 00:14:55.792 "reset": true, 00:14:55.792 "nvme_admin": false, 00:14:55.792 "nvme_io": false, 00:14:55.792 "nvme_io_md": false, 00:14:55.792 "write_zeroes": true, 00:14:55.792 "zcopy": true, 00:14:55.792 "get_zone_info": false, 00:14:55.792 "zone_management": false, 00:14:55.792 "zone_append": false, 00:14:55.792 "compare": false, 00:14:55.792 "compare_and_write": false, 00:14:55.792 "abort": true, 00:14:55.792 "seek_hole": false, 00:14:55.792 "seek_data": false, 00:14:55.792 "copy": true, 00:14:55.792 "nvme_iov_md": false 00:14:55.792 }, 00:14:55.792 "memory_domains": [ 00:14:55.792 { 00:14:55.792 "dma_device_id": "system", 00:14:55.792 "dma_device_type": 1 00:14:55.792 }, 00:14:55.792 { 00:14:55.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.792 "dma_device_type": 2 00:14:55.792 } 00:14:55.792 ], 00:14:55.792 "driver_specific": {} 00:14:55.792 } 00:14:55.792 ] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:55.792 07:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 BaseBdev3 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.792 [ 00:14:55.792 { 
00:14:55.792 "name": "BaseBdev3", 00:14:55.792 "aliases": [ 00:14:55.792 "bad11f61-bed6-465d-a028-6fcaea0e6b48" 00:14:55.792 ], 00:14:55.792 "product_name": "Malloc disk", 00:14:55.792 "block_size": 512, 00:14:55.792 "num_blocks": 65536, 00:14:55.792 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:55.792 "assigned_rate_limits": { 00:14:55.792 "rw_ios_per_sec": 0, 00:14:55.792 "rw_mbytes_per_sec": 0, 00:14:55.792 "r_mbytes_per_sec": 0, 00:14:55.792 "w_mbytes_per_sec": 0 00:14:55.792 }, 00:14:55.792 "claimed": false, 00:14:55.792 "zoned": false, 00:14:55.792 "supported_io_types": { 00:14:55.792 "read": true, 00:14:55.792 "write": true, 00:14:55.792 "unmap": true, 00:14:55.792 "flush": true, 00:14:55.792 "reset": true, 00:14:55.792 "nvme_admin": false, 00:14:55.792 "nvme_io": false, 00:14:55.792 "nvme_io_md": false, 00:14:55.792 "write_zeroes": true, 00:14:55.792 "zcopy": true, 00:14:55.792 "get_zone_info": false, 00:14:55.792 "zone_management": false, 00:14:55.792 "zone_append": false, 00:14:55.792 "compare": false, 00:14:55.792 "compare_and_write": false, 00:14:55.792 "abort": true, 00:14:55.792 "seek_hole": false, 00:14:55.792 "seek_data": false, 00:14:55.792 "copy": true, 00:14:55.792 "nvme_iov_md": false 00:14:55.792 }, 00:14:55.792 "memory_domains": [ 00:14:55.792 { 00:14:55.792 "dma_device_id": "system", 00:14:55.792 "dma_device_type": 1 00:14:55.792 }, 00:14:55.792 { 00:14:55.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.792 "dma_device_type": 2 00:14:55.792 } 00:14:55.792 ], 00:14:55.792 "driver_specific": {} 00:14:55.792 } 00:14:55.792 ] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.792 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.052 BaseBdev4 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.052 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:14:56.052 [ 00:14:56.052 { 00:14:56.052 "name": "BaseBdev4", 00:14:56.052 "aliases": [ 00:14:56.052 "b90a515a-4bf4-441b-b736-b8bba3245cd5" 00:14:56.052 ], 00:14:56.052 "product_name": "Malloc disk", 00:14:56.052 "block_size": 512, 00:14:56.052 "num_blocks": 65536, 00:14:56.052 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:56.052 "assigned_rate_limits": { 00:14:56.052 "rw_ios_per_sec": 0, 00:14:56.052 "rw_mbytes_per_sec": 0, 00:14:56.052 "r_mbytes_per_sec": 0, 00:14:56.052 "w_mbytes_per_sec": 0 00:14:56.052 }, 00:14:56.052 "claimed": false, 00:14:56.052 "zoned": false, 00:14:56.052 "supported_io_types": { 00:14:56.052 "read": true, 00:14:56.052 "write": true, 00:14:56.052 "unmap": true, 00:14:56.052 "flush": true, 00:14:56.052 "reset": true, 00:14:56.052 "nvme_admin": false, 00:14:56.052 "nvme_io": false, 00:14:56.052 "nvme_io_md": false, 00:14:56.052 "write_zeroes": true, 00:14:56.052 "zcopy": true, 00:14:56.052 "get_zone_info": false, 00:14:56.052 "zone_management": false, 00:14:56.052 "zone_append": false, 00:14:56.052 "compare": false, 00:14:56.052 "compare_and_write": false, 00:14:56.052 "abort": true, 00:14:56.052 "seek_hole": false, 00:14:56.052 "seek_data": false, 00:14:56.052 "copy": true, 00:14:56.052 "nvme_iov_md": false 00:14:56.052 }, 00:14:56.052 "memory_domains": [ 00:14:56.052 { 00:14:56.052 "dma_device_id": "system", 00:14:56.052 "dma_device_type": 1 00:14:56.052 }, 00:14:56.053 { 00:14:56.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.053 "dma_device_type": 2 00:14:56.053 } 00:14:56.053 ], 00:14:56.053 "driver_specific": {} 00:14:56.053 } 00:14:56.053 ] 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:56.053 07:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.053 [2024-11-20 07:11:53.174921] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.053 [2024-11-20 07:11:53.175109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.053 [2024-11-20 07:11:53.175252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.053 [2024-11-20 07:11:53.177771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:56.053 [2024-11-20 07:11:53.177993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.053 "name": "Existed_Raid", 00:14:56.053 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:56.053 "strip_size_kb": 64, 00:14:56.053 "state": "configuring", 00:14:56.053 "raid_level": "concat", 00:14:56.053 "superblock": true, 00:14:56.053 "num_base_bdevs": 4, 00:14:56.053 "num_base_bdevs_discovered": 3, 00:14:56.053 "num_base_bdevs_operational": 4, 00:14:56.053 "base_bdevs_list": [ 00:14:56.053 { 00:14:56.053 "name": "BaseBdev1", 00:14:56.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.053 "is_configured": false, 00:14:56.053 "data_offset": 0, 00:14:56.053 "data_size": 0 00:14:56.053 }, 00:14:56.053 { 00:14:56.053 "name": "BaseBdev2", 00:14:56.053 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:56.053 "is_configured": true, 00:14:56.053 "data_offset": 2048, 00:14:56.053 "data_size": 63488 
00:14:56.053 }, 00:14:56.053 { 00:14:56.053 "name": "BaseBdev3", 00:14:56.053 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:56.053 "is_configured": true, 00:14:56.053 "data_offset": 2048, 00:14:56.053 "data_size": 63488 00:14:56.053 }, 00:14:56.053 { 00:14:56.053 "name": "BaseBdev4", 00:14:56.053 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:56.053 "is_configured": true, 00:14:56.053 "data_offset": 2048, 00:14:56.053 "data_size": 63488 00:14:56.053 } 00:14:56.053 ] 00:14:56.053 }' 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.053 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.621 [2024-11-20 07:11:53.695084] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.621 "name": "Existed_Raid", 00:14:56.621 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:56.621 "strip_size_kb": 64, 00:14:56.621 "state": "configuring", 00:14:56.621 "raid_level": "concat", 00:14:56.621 "superblock": true, 00:14:56.621 "num_base_bdevs": 4, 00:14:56.621 "num_base_bdevs_discovered": 2, 00:14:56.621 "num_base_bdevs_operational": 4, 00:14:56.621 "base_bdevs_list": [ 00:14:56.621 { 00:14:56.621 "name": "BaseBdev1", 00:14:56.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.621 "is_configured": false, 00:14:56.621 "data_offset": 0, 00:14:56.621 "data_size": 0 00:14:56.621 }, 00:14:56.621 { 00:14:56.621 "name": null, 00:14:56.621 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:56.621 "is_configured": false, 00:14:56.621 "data_offset": 0, 00:14:56.621 "data_size": 63488 
00:14:56.621 }, 00:14:56.621 { 00:14:56.621 "name": "BaseBdev3", 00:14:56.621 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:56.621 "is_configured": true, 00:14:56.621 "data_offset": 2048, 00:14:56.621 "data_size": 63488 00:14:56.621 }, 00:14:56.621 { 00:14:56.621 "name": "BaseBdev4", 00:14:56.621 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:56.621 "is_configured": true, 00:14:56.621 "data_offset": 2048, 00:14:56.621 "data_size": 63488 00:14:56.621 } 00:14:56.621 ] 00:14:56.621 }' 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.621 07:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.188 [2024-11-20 07:11:54.312888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.188 BaseBdev1 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.188 [ 00:14:57.188 { 00:14:57.188 "name": "BaseBdev1", 00:14:57.188 "aliases": [ 00:14:57.188 "542fe389-8eaa-43fa-af2b-46bbe0892d1f" 00:14:57.188 ], 00:14:57.188 "product_name": "Malloc disk", 00:14:57.188 "block_size": 512, 00:14:57.188 "num_blocks": 65536, 00:14:57.188 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:14:57.188 "assigned_rate_limits": { 00:14:57.188 "rw_ios_per_sec": 0, 00:14:57.188 "rw_mbytes_per_sec": 0, 
00:14:57.188 "r_mbytes_per_sec": 0, 00:14:57.188 "w_mbytes_per_sec": 0 00:14:57.188 }, 00:14:57.188 "claimed": true, 00:14:57.188 "claim_type": "exclusive_write", 00:14:57.188 "zoned": false, 00:14:57.188 "supported_io_types": { 00:14:57.188 "read": true, 00:14:57.188 "write": true, 00:14:57.188 "unmap": true, 00:14:57.188 "flush": true, 00:14:57.188 "reset": true, 00:14:57.188 "nvme_admin": false, 00:14:57.188 "nvme_io": false, 00:14:57.188 "nvme_io_md": false, 00:14:57.188 "write_zeroes": true, 00:14:57.188 "zcopy": true, 00:14:57.188 "get_zone_info": false, 00:14:57.188 "zone_management": false, 00:14:57.188 "zone_append": false, 00:14:57.188 "compare": false, 00:14:57.188 "compare_and_write": false, 00:14:57.188 "abort": true, 00:14:57.188 "seek_hole": false, 00:14:57.188 "seek_data": false, 00:14:57.188 "copy": true, 00:14:57.188 "nvme_iov_md": false 00:14:57.188 }, 00:14:57.188 "memory_domains": [ 00:14:57.188 { 00:14:57.188 "dma_device_id": "system", 00:14:57.188 "dma_device_type": 1 00:14:57.188 }, 00:14:57.188 { 00:14:57.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.188 "dma_device_type": 2 00:14:57.188 } 00:14:57.188 ], 00:14:57.188 "driver_specific": {} 00:14:57.188 } 00:14:57.188 ] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:57.188 07:11:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.188 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.189 "name": "Existed_Raid", 00:14:57.189 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:57.189 "strip_size_kb": 64, 00:14:57.189 "state": "configuring", 00:14:57.189 "raid_level": "concat", 00:14:57.189 "superblock": true, 00:14:57.189 "num_base_bdevs": 4, 00:14:57.189 "num_base_bdevs_discovered": 3, 00:14:57.189 "num_base_bdevs_operational": 4, 00:14:57.189 "base_bdevs_list": [ 00:14:57.189 { 00:14:57.189 "name": "BaseBdev1", 00:14:57.189 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:14:57.189 "is_configured": true, 00:14:57.189 "data_offset": 2048, 00:14:57.189 "data_size": 63488 00:14:57.189 }, 00:14:57.189 { 
00:14:57.189 "name": null, 00:14:57.189 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:57.189 "is_configured": false, 00:14:57.189 "data_offset": 0, 00:14:57.189 "data_size": 63488 00:14:57.189 }, 00:14:57.189 { 00:14:57.189 "name": "BaseBdev3", 00:14:57.189 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:57.189 "is_configured": true, 00:14:57.189 "data_offset": 2048, 00:14:57.189 "data_size": 63488 00:14:57.189 }, 00:14:57.189 { 00:14:57.189 "name": "BaseBdev4", 00:14:57.189 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:57.189 "is_configured": true, 00:14:57.189 "data_offset": 2048, 00:14:57.189 "data_size": 63488 00:14:57.189 } 00:14:57.189 ] 00:14:57.189 }' 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.189 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.756 [2024-11-20 07:11:54.953166] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.756 07:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.756 07:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.756 "name": "Existed_Raid", 00:14:57.756 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:57.756 "strip_size_kb": 64, 00:14:57.756 "state": "configuring", 00:14:57.756 "raid_level": "concat", 00:14:57.756 "superblock": true, 00:14:57.756 "num_base_bdevs": 4, 00:14:57.756 "num_base_bdevs_discovered": 2, 00:14:57.756 "num_base_bdevs_operational": 4, 00:14:57.756 "base_bdevs_list": [ 00:14:57.756 { 00:14:57.756 "name": "BaseBdev1", 00:14:57.756 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:14:57.756 "is_configured": true, 00:14:57.756 "data_offset": 2048, 00:14:57.756 "data_size": 63488 00:14:57.756 }, 00:14:57.756 { 00:14:57.756 "name": null, 00:14:57.756 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:57.756 "is_configured": false, 00:14:57.756 "data_offset": 0, 00:14:57.756 "data_size": 63488 00:14:57.756 }, 00:14:57.756 { 00:14:57.756 "name": null, 00:14:57.756 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:57.756 "is_configured": false, 00:14:57.756 "data_offset": 0, 00:14:57.756 "data_size": 63488 00:14:57.756 }, 00:14:57.756 { 00:14:57.756 "name": "BaseBdev4", 00:14:57.756 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:57.756 "is_configured": true, 00:14:57.756 "data_offset": 2048, 00:14:57.756 "data_size": 63488 00:14:57.756 } 00:14:57.756 ] 00:14:57.756 }' 00:14:57.756 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.756 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.323 
07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.323 [2024-11-20 07:11:55.553307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:58.323 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.324 "name": "Existed_Raid", 00:14:58.324 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:58.324 "strip_size_kb": 64, 00:14:58.324 "state": "configuring", 00:14:58.324 "raid_level": "concat", 00:14:58.324 "superblock": true, 00:14:58.324 "num_base_bdevs": 4, 00:14:58.324 "num_base_bdevs_discovered": 3, 00:14:58.324 "num_base_bdevs_operational": 4, 00:14:58.324 "base_bdevs_list": [ 00:14:58.324 { 00:14:58.324 "name": "BaseBdev1", 00:14:58.324 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:14:58.324 "is_configured": true, 00:14:58.324 "data_offset": 2048, 00:14:58.324 "data_size": 63488 00:14:58.324 }, 00:14:58.324 { 00:14:58.324 "name": null, 00:14:58.324 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:58.324 "is_configured": false, 00:14:58.324 "data_offset": 0, 00:14:58.324 "data_size": 63488 00:14:58.324 }, 00:14:58.324 { 00:14:58.324 "name": "BaseBdev3", 00:14:58.324 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:58.324 "is_configured": true, 00:14:58.324 "data_offset": 2048, 00:14:58.324 "data_size": 63488 00:14:58.324 }, 00:14:58.324 { 00:14:58.324 "name": "BaseBdev4", 00:14:58.324 "uuid": 
"b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:58.324 "is_configured": true, 00:14:58.324 "data_offset": 2048, 00:14:58.324 "data_size": 63488 00:14:58.324 } 00:14:58.324 ] 00:14:58.324 }' 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.324 07:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.891 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 [2024-11-20 07:11:56.133510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.150 "name": "Existed_Raid", 00:14:59.150 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:59.150 "strip_size_kb": 64, 00:14:59.150 "state": "configuring", 00:14:59.150 "raid_level": "concat", 00:14:59.150 "superblock": true, 00:14:59.150 "num_base_bdevs": 4, 00:14:59.150 "num_base_bdevs_discovered": 2, 00:14:59.150 "num_base_bdevs_operational": 4, 00:14:59.150 "base_bdevs_list": [ 00:14:59.150 { 00:14:59.150 "name": null, 00:14:59.150 
"uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:14:59.150 "is_configured": false, 00:14:59.150 "data_offset": 0, 00:14:59.150 "data_size": 63488 00:14:59.150 }, 00:14:59.150 { 00:14:59.150 "name": null, 00:14:59.150 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:59.150 "is_configured": false, 00:14:59.150 "data_offset": 0, 00:14:59.150 "data_size": 63488 00:14:59.150 }, 00:14:59.150 { 00:14:59.150 "name": "BaseBdev3", 00:14:59.150 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:59.150 "is_configured": true, 00:14:59.150 "data_offset": 2048, 00:14:59.150 "data_size": 63488 00:14:59.150 }, 00:14:59.150 { 00:14:59.150 "name": "BaseBdev4", 00:14:59.150 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:59.150 "is_configured": true, 00:14:59.150 "data_offset": 2048, 00:14:59.150 "data_size": 63488 00:14:59.150 } 00:14:59.150 ] 00:14:59.150 }' 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.150 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.408 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:59.408 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.408 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.408 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 [2024-11-20 07:11:56.767149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.667 07:11:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.667 "name": "Existed_Raid", 00:14:59.667 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:14:59.667 "strip_size_kb": 64, 00:14:59.667 "state": "configuring", 00:14:59.667 "raid_level": "concat", 00:14:59.667 "superblock": true, 00:14:59.667 "num_base_bdevs": 4, 00:14:59.667 "num_base_bdevs_discovered": 3, 00:14:59.667 "num_base_bdevs_operational": 4, 00:14:59.667 "base_bdevs_list": [ 00:14:59.667 { 00:14:59.667 "name": null, 00:14:59.667 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:14:59.667 "is_configured": false, 00:14:59.667 "data_offset": 0, 00:14:59.667 "data_size": 63488 00:14:59.667 }, 00:14:59.667 { 00:14:59.667 "name": "BaseBdev2", 00:14:59.667 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:14:59.667 "is_configured": true, 00:14:59.667 "data_offset": 2048, 00:14:59.667 "data_size": 63488 00:14:59.667 }, 00:14:59.667 { 00:14:59.667 "name": "BaseBdev3", 00:14:59.667 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:14:59.667 "is_configured": true, 00:14:59.667 "data_offset": 2048, 00:14:59.667 "data_size": 63488 00:14:59.667 }, 00:14:59.667 { 00:14:59.667 "name": "BaseBdev4", 00:14:59.667 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:14:59.667 "is_configured": true, 00:14:59.667 "data_offset": 2048, 00:14:59.667 "data_size": 63488 00:14:59.667 } 00:14:59.667 ] 00:14:59.667 }' 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.667 07:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.235 07:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 542fe389-8eaa-43fa-af2b-46bbe0892d1f 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.235 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.235 NewBaseBdev 00:15:00.235 [2024-11-20 07:11:57.453955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:00.235 [2024-11-20 07:11:57.454256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:00.235 [2024-11-20 07:11:57.454274] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:00.235 [2024-11-20 07:11:57.454590] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:00.236 [2024-11-20 07:11:57.454776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:00.236 [2024-11-20 07:11:57.454798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:00.236 [2024-11-20 07:11:57.454976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.236 
07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.236 [ 00:15:00.236 { 00:15:00.236 "name": "NewBaseBdev", 00:15:00.236 "aliases": [ 00:15:00.236 "542fe389-8eaa-43fa-af2b-46bbe0892d1f" 00:15:00.236 ], 00:15:00.236 "product_name": "Malloc disk", 00:15:00.236 "block_size": 512, 00:15:00.236 "num_blocks": 65536, 00:15:00.236 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:15:00.236 "assigned_rate_limits": { 00:15:00.236 "rw_ios_per_sec": 0, 00:15:00.236 "rw_mbytes_per_sec": 0, 00:15:00.236 "r_mbytes_per_sec": 0, 00:15:00.236 "w_mbytes_per_sec": 0 00:15:00.236 }, 00:15:00.236 "claimed": true, 00:15:00.236 "claim_type": "exclusive_write", 00:15:00.236 "zoned": false, 00:15:00.236 "supported_io_types": { 00:15:00.236 "read": true, 00:15:00.236 "write": true, 00:15:00.236 "unmap": true, 00:15:00.236 "flush": true, 00:15:00.236 "reset": true, 00:15:00.236 "nvme_admin": false, 00:15:00.236 "nvme_io": false, 00:15:00.236 "nvme_io_md": false, 00:15:00.236 "write_zeroes": true, 00:15:00.236 "zcopy": true, 00:15:00.236 "get_zone_info": false, 00:15:00.236 "zone_management": false, 00:15:00.236 "zone_append": false, 00:15:00.236 "compare": false, 00:15:00.236 "compare_and_write": false, 00:15:00.236 "abort": true, 00:15:00.236 "seek_hole": false, 00:15:00.236 "seek_data": false, 00:15:00.236 "copy": true, 00:15:00.236 "nvme_iov_md": false 00:15:00.236 }, 00:15:00.236 "memory_domains": [ 00:15:00.236 { 00:15:00.236 "dma_device_id": "system", 00:15:00.236 "dma_device_type": 1 00:15:00.236 }, 00:15:00.236 { 00:15:00.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.236 "dma_device_type": 2 00:15:00.236 } 00:15:00.236 ], 00:15:00.236 "driver_specific": {} 00:15:00.236 } 00:15:00.236 ] 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:00.236 07:11:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.236 "name": "Existed_Raid", 00:15:00.236 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:15:00.236 "strip_size_kb": 64, 00:15:00.236 
"state": "online", 00:15:00.236 "raid_level": "concat", 00:15:00.236 "superblock": true, 00:15:00.236 "num_base_bdevs": 4, 00:15:00.236 "num_base_bdevs_discovered": 4, 00:15:00.236 "num_base_bdevs_operational": 4, 00:15:00.236 "base_bdevs_list": [ 00:15:00.236 { 00:15:00.236 "name": "NewBaseBdev", 00:15:00.236 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:15:00.236 "is_configured": true, 00:15:00.236 "data_offset": 2048, 00:15:00.236 "data_size": 63488 00:15:00.236 }, 00:15:00.236 { 00:15:00.236 "name": "BaseBdev2", 00:15:00.236 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:15:00.236 "is_configured": true, 00:15:00.236 "data_offset": 2048, 00:15:00.236 "data_size": 63488 00:15:00.236 }, 00:15:00.236 { 00:15:00.236 "name": "BaseBdev3", 00:15:00.236 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:15:00.236 "is_configured": true, 00:15:00.236 "data_offset": 2048, 00:15:00.236 "data_size": 63488 00:15:00.236 }, 00:15:00.236 { 00:15:00.236 "name": "BaseBdev4", 00:15:00.236 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:15:00.236 "is_configured": true, 00:15:00.236 "data_offset": 2048, 00:15:00.236 "data_size": 63488 00:15:00.236 } 00:15:00.236 ] 00:15:00.236 }' 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.236 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:00.803 
07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.803 07:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:00.803 [2024-11-20 07:11:58.002673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:00.803 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.803 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:00.803 "name": "Existed_Raid", 00:15:00.803 "aliases": [ 00:15:00.803 "1c580e4b-683b-4299-8f02-1b82ec4b1db6" 00:15:00.803 ], 00:15:00.803 "product_name": "Raid Volume", 00:15:00.803 "block_size": 512, 00:15:00.803 "num_blocks": 253952, 00:15:00.803 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:15:00.803 "assigned_rate_limits": { 00:15:00.803 "rw_ios_per_sec": 0, 00:15:00.803 "rw_mbytes_per_sec": 0, 00:15:00.803 "r_mbytes_per_sec": 0, 00:15:00.803 "w_mbytes_per_sec": 0 00:15:00.803 }, 00:15:00.803 "claimed": false, 00:15:00.803 "zoned": false, 00:15:00.803 "supported_io_types": { 00:15:00.803 "read": true, 00:15:00.803 "write": true, 00:15:00.803 "unmap": true, 00:15:00.803 "flush": true, 00:15:00.803 "reset": true, 00:15:00.803 "nvme_admin": false, 00:15:00.803 "nvme_io": false, 00:15:00.803 "nvme_io_md": false, 00:15:00.803 "write_zeroes": true, 00:15:00.803 "zcopy": false, 00:15:00.803 "get_zone_info": false, 00:15:00.803 "zone_management": false, 00:15:00.803 "zone_append": false, 00:15:00.803 "compare": false, 00:15:00.803 "compare_and_write": false, 00:15:00.803 "abort": 
false, 00:15:00.803 "seek_hole": false, 00:15:00.803 "seek_data": false, 00:15:00.803 "copy": false, 00:15:00.803 "nvme_iov_md": false 00:15:00.803 }, 00:15:00.803 "memory_domains": [ 00:15:00.803 { 00:15:00.803 "dma_device_id": "system", 00:15:00.803 "dma_device_type": 1 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.803 "dma_device_type": 2 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "system", 00:15:00.803 "dma_device_type": 1 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.803 "dma_device_type": 2 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "system", 00:15:00.803 "dma_device_type": 1 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.803 "dma_device_type": 2 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "system", 00:15:00.803 "dma_device_type": 1 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.803 "dma_device_type": 2 00:15:00.803 } 00:15:00.803 ], 00:15:00.803 "driver_specific": { 00:15:00.803 "raid": { 00:15:00.803 "uuid": "1c580e4b-683b-4299-8f02-1b82ec4b1db6", 00:15:00.803 "strip_size_kb": 64, 00:15:00.803 "state": "online", 00:15:00.803 "raid_level": "concat", 00:15:00.803 "superblock": true, 00:15:00.803 "num_base_bdevs": 4, 00:15:00.803 "num_base_bdevs_discovered": 4, 00:15:00.803 "num_base_bdevs_operational": 4, 00:15:00.803 "base_bdevs_list": [ 00:15:00.803 { 00:15:00.803 "name": "NewBaseBdev", 00:15:00.803 "uuid": "542fe389-8eaa-43fa-af2b-46bbe0892d1f", 00:15:00.803 "is_configured": true, 00:15:00.803 "data_offset": 2048, 00:15:00.803 "data_size": 63488 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "name": "BaseBdev2", 00:15:00.803 "uuid": "f4f55f1f-4706-43ad-a3ba-f42c420cc13a", 00:15:00.803 "is_configured": true, 00:15:00.803 "data_offset": 2048, 00:15:00.803 "data_size": 63488 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 
"name": "BaseBdev3", 00:15:00.803 "uuid": "bad11f61-bed6-465d-a028-6fcaea0e6b48", 00:15:00.803 "is_configured": true, 00:15:00.803 "data_offset": 2048, 00:15:00.803 "data_size": 63488 00:15:00.803 }, 00:15:00.803 { 00:15:00.803 "name": "BaseBdev4", 00:15:00.803 "uuid": "b90a515a-4bf4-441b-b736-b8bba3245cd5", 00:15:00.803 "is_configured": true, 00:15:00.803 "data_offset": 2048, 00:15:00.803 "data_size": 63488 00:15:00.803 } 00:15:00.803 ] 00:15:00.803 } 00:15:00.803 } 00:15:00.803 }' 00:15:00.803 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:00.803 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:00.803 BaseBdev2 00:15:00.803 BaseBdev3 00:15:00.803 BaseBdev4' 00:15:00.803 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.063 07:11:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.063 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.322 [2024-11-20 07:11:58.386352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.322 [2024-11-20 07:11:58.386568] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.322 [2024-11-20 07:11:58.386789] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.322 [2024-11-20 07:11:58.386908] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.322 [2024-11-20 07:11:58.386928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71997 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71997 ']' 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71997 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71997 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71997' 00:15:01.322 killing process with pid 71997 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71997 00:15:01.322 [2024-11-20 07:11:58.428596] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.322 07:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71997 00:15:01.581 [2024-11-20 07:11:58.782853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.523 07:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:02.523 00:15:02.523 real 0m12.854s 00:15:02.523 user 0m21.204s 00:15:02.523 sys 0m1.925s 00:15:02.523 07:11:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.523 07:11:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.523 ************************************ 00:15:02.523 END TEST raid_state_function_test_sb 00:15:02.523 ************************************ 00:15:02.794 07:11:59 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:02.794 07:11:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:02.794 07:11:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.794 07:11:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:02.794 ************************************ 00:15:02.794 START TEST raid_superblock_test 00:15:02.794 ************************************ 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72680 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72680 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72680 ']' 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.794 07:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.794 [2024-11-20 07:12:00.010950] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:02.795 [2024-11-20 07:12:00.011210] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72680 ] 00:15:03.053 [2024-11-20 07:12:00.194654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.053 [2024-11-20 07:12:00.319703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.312 [2024-11-20 07:12:00.521641] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.312 [2024-11-20 07:12:00.521716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:03.880 
07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.880 malloc1 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.880 [2024-11-20 07:12:00.970683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.880 [2024-11-20 07:12:00.971007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.880 [2024-11-20 07:12:00.971171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:03.880 [2024-11-20 07:12:00.971294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.880 [2024-11-20 07:12:00.974250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.880 [2024-11-20 07:12:00.974422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.880 pt1 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:03.880 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 malloc2 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 [2024-11-20 07:12:01.026992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.881 [2024-11-20 07:12:01.027303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.881 [2024-11-20 07:12:01.027382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:03.881 [2024-11-20 07:12:01.027543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.881 [2024-11-20 07:12:01.030375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.881 [2024-11-20 07:12:01.030529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.881 
pt2 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 malloc3 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 [2024-11-20 07:12:01.096185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:03.881 [2024-11-20 07:12:01.096483] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.881 [2024-11-20 07:12:01.096567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:03.881 [2024-11-20 07:12:01.096817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.881 [2024-11-20 07:12:01.099805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.881 pt3 00:15:03.881 [2024-11-20 07:12:01.099981] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 malloc4 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 [2024-11-20 07:12:01.149220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:03.881 [2024-11-20 07:12:01.149505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.881 [2024-11-20 07:12:01.149549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:03.881 [2024-11-20 07:12:01.149565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.881 [2024-11-20 07:12:01.152456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.881 pt4 00:15:03.881 [2024-11-20 07:12:01.152674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.881 [2024-11-20 07:12:01.157408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:03.881 [2024-11-20 
07:12:01.159812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:03.881 [2024-11-20 07:12:01.159957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:03.881 [2024-11-20 07:12:01.160049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:03.881 [2024-11-20 07:12:01.160303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:03.881 [2024-11-20 07:12:01.160321] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:03.881 [2024-11-20 07:12:01.160628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:03.881 [2024-11-20 07:12:01.160848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.881 [2024-11-20 07:12:01.160868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.881 [2024-11-20 07:12:01.161064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.881 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.882 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.140 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.140 "name": "raid_bdev1", 00:15:04.140 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11", 00:15:04.140 "strip_size_kb": 64, 00:15:04.140 "state": "online", 00:15:04.140 "raid_level": "concat", 00:15:04.140 "superblock": true, 00:15:04.140 "num_base_bdevs": 4, 00:15:04.140 "num_base_bdevs_discovered": 4, 00:15:04.140 "num_base_bdevs_operational": 4, 00:15:04.140 "base_bdevs_list": [ 00:15:04.140 { 00:15:04.140 "name": "pt1", 00:15:04.140 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.140 "is_configured": true, 00:15:04.140 "data_offset": 2048, 00:15:04.140 "data_size": 63488 00:15:04.140 }, 00:15:04.140 { 00:15:04.140 "name": "pt2", 00:15:04.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.140 "is_configured": true, 00:15:04.140 "data_offset": 2048, 00:15:04.140 "data_size": 63488 00:15:04.140 }, 00:15:04.140 { 00:15:04.140 "name": "pt3", 00:15:04.140 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:04.140 "is_configured": true, 00:15:04.140 "data_offset": 2048, 00:15:04.140 
"data_size": 63488 00:15:04.140 }, 00:15:04.140 { 00:15:04.140 "name": "pt4", 00:15:04.140 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:04.140 "is_configured": true, 00:15:04.140 "data_offset": 2048, 00:15:04.140 "data_size": 63488 00:15:04.140 } 00:15:04.140 ] 00:15:04.140 }' 00:15:04.140 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.140 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.398 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:04.398 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:04.398 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:04.398 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:04.398 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:04.399 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:04.399 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:04.399 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:04.399 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.399 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:04.399 [2024-11-20 07:12:01.681926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.399 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:04.658 "name": "raid_bdev1", 00:15:04.658 "aliases": [ 00:15:04.658 "c7ff9af0-f47f-47b3-9b48-038252523f11" 
00:15:04.658 ], 00:15:04.658 "product_name": "Raid Volume", 00:15:04.658 "block_size": 512, 00:15:04.658 "num_blocks": 253952, 00:15:04.658 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11", 00:15:04.658 "assigned_rate_limits": { 00:15:04.658 "rw_ios_per_sec": 0, 00:15:04.658 "rw_mbytes_per_sec": 0, 00:15:04.658 "r_mbytes_per_sec": 0, 00:15:04.658 "w_mbytes_per_sec": 0 00:15:04.658 }, 00:15:04.658 "claimed": false, 00:15:04.658 "zoned": false, 00:15:04.658 "supported_io_types": { 00:15:04.658 "read": true, 00:15:04.658 "write": true, 00:15:04.658 "unmap": true, 00:15:04.658 "flush": true, 00:15:04.658 "reset": true, 00:15:04.658 "nvme_admin": false, 00:15:04.658 "nvme_io": false, 00:15:04.658 "nvme_io_md": false, 00:15:04.658 "write_zeroes": true, 00:15:04.658 "zcopy": false, 00:15:04.658 "get_zone_info": false, 00:15:04.658 "zone_management": false, 00:15:04.658 "zone_append": false, 00:15:04.658 "compare": false, 00:15:04.658 "compare_and_write": false, 00:15:04.658 "abort": false, 00:15:04.658 "seek_hole": false, 00:15:04.658 "seek_data": false, 00:15:04.658 "copy": false, 00:15:04.658 "nvme_iov_md": false 00:15:04.658 }, 00:15:04.658 "memory_domains": [ 00:15:04.658 { 00:15:04.658 "dma_device_id": "system", 00:15:04.658 "dma_device_type": 1 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.658 "dma_device_type": 2 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": "system", 00:15:04.658 "dma_device_type": 1 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.658 "dma_device_type": 2 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": "system", 00:15:04.658 "dma_device_type": 1 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.658 "dma_device_type": 2 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": "system", 00:15:04.658 "dma_device_type": 1 00:15:04.658 }, 00:15:04.658 { 00:15:04.658 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",
00:15:04.658 "dma_device_type": 2
00:15:04.658 }
00:15:04.658 ],
00:15:04.658 "driver_specific": {
00:15:04.658 "raid": {
00:15:04.658 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11",
00:15:04.658 "strip_size_kb": 64,
00:15:04.658 "state": "online",
00:15:04.658 "raid_level": "concat",
00:15:04.658 "superblock": true,
00:15:04.658 "num_base_bdevs": 4,
00:15:04.658 "num_base_bdevs_discovered": 4,
00:15:04.658 "num_base_bdevs_operational": 4,
00:15:04.658 "base_bdevs_list": [
00:15:04.658 {
00:15:04.658 "name": "pt1",
00:15:04.658 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:04.658 "is_configured": true,
00:15:04.658 "data_offset": 2048,
00:15:04.658 "data_size": 63488
00:15:04.658 },
00:15:04.658 {
00:15:04.658 "name": "pt2",
00:15:04.658 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:04.658 "is_configured": true,
00:15:04.658 "data_offset": 2048,
00:15:04.658 "data_size": 63488
00:15:04.658 },
00:15:04.658 {
00:15:04.658 "name": "pt3",
00:15:04.658 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:04.658 "is_configured": true,
00:15:04.658 "data_offset": 2048,
00:15:04.658 "data_size": 63488
00:15:04.658 },
00:15:04.658 {
00:15:04.658 "name": "pt4",
00:15:04.658 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:04.658 "is_configured": true,
00:15:04.658 "data_offset": 2048,
00:15:04.658 "data_size": 63488
00:15:04.658 }
00:15:04.658 ]
00:15:04.658 }
00:15:04.658 }
00:15:04.658 }'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:04.658 pt2
00:15:04.658 pt3
00:15:04.658 pt4'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.658 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 [2024-11-20 07:12:02.053975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c7ff9af0-f47f-47b3-9b48-038252523f11
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c7ff9af0-f47f-47b3-9b48-038252523f11 ']'
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 [2024-11-20 07:12:02.105630] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:04.918 [2024-11-20 07:12:02.105809] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:15:04.918 [2024-11-20 07:12:02.106057] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:15:04.918 [2024-11-20 07:12:02.106160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:15:04.918 [2024-11-20 07:12:02.106184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:04.918 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.177 [2024-11-20 07:12:02.265711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:15:05.177 [2024-11-20 07:12:02.268453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:15:05.177 [2024-11-20 07:12:02.268703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:15:05.177 [2024-11-20 07:12:02.268774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:15:05.177 [2024-11-20 07:12:02.268851] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:15:05.177 [2024-11-20 07:12:02.268944] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:15:05.177 [2024-11-20 07:12:02.268981] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:15:05.177 [2024-11-20 07:12:02.269013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:15:05.177 [2024-11-20 07:12:02.269035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:15:05.177 [2024-11-20 07:12:02.269050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:15:05.177 request:
00:15:05.177 {
00:15:05.177 "name": "raid_bdev1",
00:15:05.177 "raid_level": "concat",
00:15:05.177 "base_bdevs": [
00:15:05.177 "malloc1",
00:15:05.177 "malloc2",
00:15:05.177 "malloc3",
00:15:05.177 "malloc4"
00:15:05.177 ],
00:15:05.177 "strip_size_kb": 64,
00:15:05.177 "superblock": false,
00:15:05.177 "method": "bdev_raid_create",
00:15:05.177 "req_id": 1
00:15:05.177 }
00:15:05.177 Got JSON-RPC error response
00:15:05.177 response:
00:15:05.177 {
00:15:05.177 "code": -17,
00:15:05.177 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:15:05.177 }
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:05.177 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.178 [2024-11-20 07:12:02.333751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:15:05.178 [2024-11-20 07:12:02.333953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:05.178 [2024-11-20 07:12:02.334106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:15:05.178 [2024-11-20 07:12:02.334257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:05.178 [2024-11-20 07:12:02.337199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:05.178 [2024-11-20 07:12:02.337365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:15:05.178 [2024-11-20 07:12:02.337569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:15:05.178 [2024-11-20 07:12:02.337750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:15:05.178 pt1
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:05.178 "name": "raid_bdev1",
00:15:05.178 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11",
00:15:05.178 "strip_size_kb": 64,
00:15:05.178 "state": "configuring",
00:15:05.178 "raid_level": "concat",
00:15:05.178 "superblock": true,
00:15:05.178 "num_base_bdevs": 4,
00:15:05.178 "num_base_bdevs_discovered": 1,
00:15:05.178 "num_base_bdevs_operational": 4,
00:15:05.178 "base_bdevs_list": [
00:15:05.178 {
00:15:05.178 "name": "pt1",
00:15:05.178 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:05.178 "is_configured": true,
00:15:05.178 "data_offset": 2048,
00:15:05.178 "data_size": 63488
00:15:05.178 },
00:15:05.178 {
00:15:05.178 "name": null,
00:15:05.178 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:05.178 "is_configured": false,
00:15:05.178 "data_offset": 2048,
00:15:05.178 "data_size": 63488
00:15:05.178 },
00:15:05.178 {
00:15:05.178 "name": null,
00:15:05.178 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:05.178 "is_configured": false,
00:15:05.178 "data_offset": 2048,
00:15:05.178 "data_size": 63488
00:15:05.178 },
00:15:05.178 {
00:15:05.178 "name": null,
00:15:05.178 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:05.178 "is_configured": false,
00:15:05.178 "data_offset": 2048,
00:15:05.178 "data_size": 63488
00:15:05.178 }
00:15:05.178 ]
00:15:05.178 }'
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:05.178 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.777 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:15:05.777 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:05.777 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.777 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.777 [2024-11-20 07:12:02.862337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:05.777 [2024-11-20 07:12:02.862592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:05.777 [2024-11-20 07:12:02.862632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:15:05.777 [2024-11-20 07:12:02.862652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:05.777 [2024-11-20 07:12:02.863226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:05.777 [2024-11-20 07:12:02.863279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:05.777 [2024-11-20 07:12:02.863396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:05.778 [2024-11-20 07:12:02.863439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:05.778 pt2
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.778 [2024-11-20 07:12:02.870299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:05.778 "name": "raid_bdev1",
00:15:05.778 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11",
00:15:05.778 "strip_size_kb": 64,
00:15:05.778 "state": "configuring",
00:15:05.778 "raid_level": "concat",
00:15:05.778 "superblock": true,
00:15:05.778 "num_base_bdevs": 4,
00:15:05.778 "num_base_bdevs_discovered": 1,
00:15:05.778 "num_base_bdevs_operational": 4,
00:15:05.778 "base_bdevs_list": [
00:15:05.778 {
00:15:05.778 "name": "pt1",
00:15:05.778 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:05.778 "is_configured": true,
00:15:05.778 "data_offset": 2048,
00:15:05.778 "data_size": 63488
00:15:05.778 },
00:15:05.778 {
00:15:05.778 "name": null,
00:15:05.778 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:05.778 "is_configured": false,
00:15:05.778 "data_offset": 0,
00:15:05.778 "data_size": 63488
00:15:05.778 },
00:15:05.778 {
00:15:05.778 "name": null,
00:15:05.778 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:05.778 "is_configured": false,
00:15:05.778 "data_offset": 2048,
00:15:05.778 "data_size": 63488
00:15:05.778 },
00:15:05.778 {
00:15:05.778 "name": null,
00:15:05.778 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:05.778 "is_configured": false,
00:15:05.778 "data_offset": 2048,
00:15:05.778 "data_size": 63488
00:15:05.778 }
00:15:05.778 ]
00:15:05.778 }'
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:05.778 07:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.345 [2024-11-20 07:12:03.398446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:15:06.345 [2024-11-20 07:12:03.398652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:06.345 [2024-11-20 07:12:03.398729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:15:06.345 [2024-11-20 07:12:03.398857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:06.345 [2024-11-20 07:12:03.399470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:06.345 [2024-11-20 07:12:03.399512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:15:06.345 [2024-11-20 07:12:03.399621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:15:06.345 [2024-11-20 07:12:03.399651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:15:06.345 pt2
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.345 [2024-11-20 07:12:03.406413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:15:06.345 [2024-11-20 07:12:03.406590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:06.345 [2024-11-20 07:12:03.406675] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:15:06.345 [2024-11-20 07:12:03.406904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:06.345 [2024-11-20 07:12:03.407397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:06.345 [2024-11-20 07:12:03.407547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:15:06.345 [2024-11-20 07:12:03.407763] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:15:06.345 [2024-11-20 07:12:03.407962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:15:06.345 pt3
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.345 [2024-11-20 07:12:03.414394] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:15:06.345 [2024-11-20 07:12:03.414562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:06.345 [2024-11-20 07:12:03.414633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:15:06.345 [2024-11-20 07:12:03.414795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:06.345 [2024-11-20 07:12:03.415285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:06.345 [2024-11-20 07:12:03.415319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:15:06.345 [2024-11-20 07:12:03.415402] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:15:06.345 [2024-11-20 07:12:03.415429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:15:06.345 [2024-11-20 07:12:03.415592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:15:06.345 [2024-11-20 07:12:03.415607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:15:06.345 [2024-11-20 07:12:03.415930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:15:06.345 [2024-11-20 07:12:03.416124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:15:06.345 [2024-11-20 07:12:03.416153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:15:06.345 [2024-11-20 07:12:03.416309] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:06.345 pt4
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.345 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:06.345 "name": "raid_bdev1",
00:15:06.345 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11",
00:15:06.345 "strip_size_kb": 64,
00:15:06.345 "state": "online",
00:15:06.345 "raid_level": "concat",
00:15:06.345 "superblock": true,
00:15:06.345 "num_base_bdevs": 4,
00:15:06.345 "num_base_bdevs_discovered": 4,
00:15:06.345 "num_base_bdevs_operational": 4,
00:15:06.345 "base_bdevs_list": [
00:15:06.345 {
00:15:06.345 "name": "pt1",
00:15:06.345 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:06.345 "is_configured": true,
00:15:06.345 "data_offset": 2048,
00:15:06.345 "data_size": 63488
00:15:06.345 },
00:15:06.345 {
00:15:06.345 "name": "pt2",
00:15:06.345 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:06.345 "is_configured": true,
00:15:06.345 "data_offset": 2048,
00:15:06.345 "data_size": 63488
00:15:06.345 },
00:15:06.345 {
00:15:06.345 "name": "pt3",
00:15:06.345 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:06.345 "is_configured": true,
00:15:06.345 "data_offset": 2048,
00:15:06.346 "data_size": 63488
00:15:06.346 },
00:15:06.346 {
00:15:06.346 "name": "pt4",
00:15:06.346 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:06.346 "is_configured": true,
00:15:06.346 "data_offset": 2048,
00:15:06.346 "data_size": 63488
00:15:06.346 }
00:15:06.346 ]
00:15:06.346 }'
00:15:06.346 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:06.346 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.915 [2024-11-20 07:12:03.939006] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:15:06.915 "name": "raid_bdev1",
00:15:06.915 "aliases": [
00:15:06.915 "c7ff9af0-f47f-47b3-9b48-038252523f11"
00:15:06.915 ],
00:15:06.915 "product_name": "Raid Volume",
00:15:06.915 "block_size": 512,
00:15:06.915 "num_blocks": 253952,
00:15:06.915 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11",
00:15:06.915 "assigned_rate_limits": {
00:15:06.915 "rw_ios_per_sec": 0,
00:15:06.915 "rw_mbytes_per_sec": 0,
00:15:06.915 "r_mbytes_per_sec": 0,
00:15:06.915 "w_mbytes_per_sec": 0
00:15:06.915 },
00:15:06.915 "claimed": false,
00:15:06.915 "zoned": false,
00:15:06.915 "supported_io_types": {
00:15:06.915 "read": true,
00:15:06.915 "write": true,
00:15:06.915 "unmap": true,
00:15:06.915 "flush": true,
00:15:06.915 "reset": true,
00:15:06.915 "nvme_admin": false,
00:15:06.915 "nvme_io": false,
00:15:06.915 "nvme_io_md": false,
00:15:06.915 "write_zeroes": true,
00:15:06.915 "zcopy": false,
00:15:06.915 "get_zone_info": false,
00:15:06.915 "zone_management": false,
00:15:06.915 "zone_append": false,
00:15:06.915 "compare": false,
00:15:06.915 "compare_and_write": false,
00:15:06.915 "abort": false,
00:15:06.915 "seek_hole": false,
00:15:06.915 "seek_data": false,
00:15:06.915 "copy": false,
00:15:06.915 "nvme_iov_md": false
00:15:06.915 },
00:15:06.915 "memory_domains": [
00:15:06.915 {
00:15:06.915 "dma_device_id": "system",
00:15:06.915 "dma_device_type": 1
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:06.915 "dma_device_type": 2
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "system",
00:15:06.915 "dma_device_type": 1
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:06.915 "dma_device_type": 2
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "system",
00:15:06.915 "dma_device_type": 1
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:06.915 "dma_device_type": 2
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "system",
00:15:06.915 "dma_device_type": 1
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:06.915 "dma_device_type": 2
00:15:06.915 }
00:15:06.915 ],
00:15:06.915 "driver_specific": {
00:15:06.915 "raid": {
00:15:06.915 "uuid": "c7ff9af0-f47f-47b3-9b48-038252523f11",
00:15:06.915 "strip_size_kb": 64,
00:15:06.915 "state": "online",
00:15:06.915 "raid_level": "concat",
00:15:06.915 "superblock": true,
00:15:06.915 "num_base_bdevs": 4,
00:15:06.915 "num_base_bdevs_discovered": 4,
00:15:06.915 "num_base_bdevs_operational": 4,
00:15:06.915 "base_bdevs_list": [
00:15:06.915 {
00:15:06.915 "name": "pt1",
00:15:06.915 "uuid": "00000000-0000-0000-0000-000000000001",
00:15:06.915 "is_configured": true,
00:15:06.915 "data_offset": 2048,
00:15:06.915 "data_size": 63488
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "name": "pt2",
00:15:06.915 "uuid": "00000000-0000-0000-0000-000000000002",
00:15:06.915 "is_configured": true,
00:15:06.915 "data_offset": 2048,
00:15:06.915 "data_size": 63488
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "name": "pt3",
00:15:06.915 "uuid": "00000000-0000-0000-0000-000000000003",
00:15:06.915 "is_configured": true,
00:15:06.915 "data_offset": 2048,
00:15:06.915 "data_size": 63488
00:15:06.915 },
00:15:06.915 {
00:15:06.915 "name": "pt4",
00:15:06.915 "uuid": "00000000-0000-0000-0000-000000000004",
00:15:06.915 "is_configured": true,
00:15:06.915 "data_offset": 2048,
00:15:06.915 "data_size": 63488
00:15:06.915 }
00:15:06.915 ]
00:15:06.915 }
00:15:06.915 }
00:15:06.915 }'
00:15:06.915 07:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:15:06.915 pt2
00:15:06.915 pt3
00:15:06.915 pt4'
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd
bdev_get_bdevs -b pt2 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.915 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.175 [2024-11-20 07:12:04.315031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c7ff9af0-f47f-47b3-9b48-038252523f11 '!=' c7ff9af0-f47f-47b3-9b48-038252523f11 ']' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72680 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72680 ']' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72680 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@959 -- # uname 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72680 00:15:07.175 killing process with pid 72680 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72680' 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72680 00:15:07.175 [2024-11-20 07:12:04.395937] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:07.175 [2024-11-20 07:12:04.396036] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.175 07:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72680 00:15:07.175 [2024-11-20 07:12:04.396131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.175 [2024-11-20 07:12:04.396147] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:07.433 [2024-11-20 07:12:04.744620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:08.812 07:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:08.812 00:15:08.812 real 0m5.877s 00:15:08.812 user 0m8.833s 00:15:08.812 sys 0m0.893s 00:15:08.812 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.812 07:12:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.812 ************************************ 00:15:08.812 END TEST raid_superblock_test 
00:15:08.812 ************************************ 00:15:08.812 07:12:05 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:08.812 07:12:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:08.812 07:12:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.812 07:12:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:08.812 ************************************ 00:15:08.812 START TEST raid_read_error_test 00:15:08.812 ************************************ 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6ptewpN1cR 00:15:08.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72950 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72950 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72950 ']' 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.812 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.813 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.813 07:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.813 [2024-11-20 07:12:05.932932] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:08.813 [2024-11-20 07:12:05.933332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72950 ] 00:15:08.813 [2024-11-20 07:12:06.118537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.071 [2024-11-20 07:12:06.248206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.358 [2024-11-20 07:12:06.452634] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.358 [2024-11-20 07:12:06.452824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:09.618 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.618 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:09.618 07:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:09.618 07:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:09.618 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.618 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 BaseBdev1_malloc 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 true 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 [2024-11-20 07:12:06.987639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:09.877 [2024-11-20 07:12:06.987858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.877 [2024-11-20 07:12:06.987912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:09.877 [2024-11-20 07:12:06.987933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.877 [2024-11-20 07:12:06.990741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.877 [2024-11-20 07:12:06.990792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:09.877 BaseBdev1 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 BaseBdev2_malloc 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 true 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 [2024-11-20 07:12:07.048115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:09.877 [2024-11-20 07:12:07.048216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.877 [2024-11-20 07:12:07.048249] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:09.877 [2024-11-20 07:12:07.048268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.877 [2024-11-20 07:12:07.051022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.877 [2024-11-20 07:12:07.051072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:09.877 BaseBdev2 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 BaseBdev3_malloc 00:15:09.877 07:12:07 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 true 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.877 [2024-11-20 07:12:07.118563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:09.877 [2024-11-20 07:12:07.118772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.877 [2024-11-20 07:12:07.118808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:09.877 [2024-11-20 07:12:07.118829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.877 [2024-11-20 07:12:07.121684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.877 [2024-11-20 07:12:07.121734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:09.877 BaseBdev3 00:15:09.877 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.878 BaseBdev4_malloc 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.878 true 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.878 [2024-11-20 07:12:07.178577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:09.878 [2024-11-20 07:12:07.178658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.878 [2024-11-20 07:12:07.178684] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:09.878 [2024-11-20 07:12:07.178703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.878 [2024-11-20 07:12:07.181562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.878 [2024-11-20 07:12:07.181635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:09.878 BaseBdev4 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.878 [2024-11-20 07:12:07.186651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.878 [2024-11-20 07:12:07.189219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:09.878 [2024-11-20 07:12:07.189340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.878 [2024-11-20 07:12:07.189439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:09.878 [2024-11-20 07:12:07.189738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:09.878 [2024-11-20 07:12:07.189760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:09.878 [2024-11-20 07:12:07.190110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:09.878 [2024-11-20 07:12:07.190346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:09.878 [2024-11-20 07:12:07.190366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:09.878 [2024-11-20 07:12:07.190600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:09.878 07:12:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.878 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.136 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.136 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.136 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.136 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.136 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.136 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.136 "name": "raid_bdev1", 00:15:10.136 "uuid": "bc0afaa6-37f0-4847-a924-91e0623bf024", 00:15:10.136 "strip_size_kb": 64, 00:15:10.136 "state": "online", 00:15:10.136 "raid_level": "concat", 00:15:10.136 "superblock": true, 00:15:10.136 "num_base_bdevs": 4, 00:15:10.136 "num_base_bdevs_discovered": 4, 00:15:10.136 "num_base_bdevs_operational": 4, 00:15:10.136 "base_bdevs_list": [ 
00:15:10.136 { 00:15:10.136 "name": "BaseBdev1", 00:15:10.136 "uuid": "52d0ce84-657e-54e6-a2f0-6ab92dc231b8", 00:15:10.136 "is_configured": true, 00:15:10.136 "data_offset": 2048, 00:15:10.136 "data_size": 63488 00:15:10.136 }, 00:15:10.136 { 00:15:10.136 "name": "BaseBdev2", 00:15:10.136 "uuid": "85eae749-65a6-554f-a537-a168507e0ff9", 00:15:10.136 "is_configured": true, 00:15:10.136 "data_offset": 2048, 00:15:10.136 "data_size": 63488 00:15:10.136 }, 00:15:10.136 { 00:15:10.136 "name": "BaseBdev3", 00:15:10.136 "uuid": "beb35f4b-ab9f-5dc8-bb74-0bb64d991059", 00:15:10.137 "is_configured": true, 00:15:10.137 "data_offset": 2048, 00:15:10.137 "data_size": 63488 00:15:10.137 }, 00:15:10.137 { 00:15:10.137 "name": "BaseBdev4", 00:15:10.137 "uuid": "49a1707a-59ce-5258-b1cc-172632f3046f", 00:15:10.137 "is_configured": true, 00:15:10.137 "data_offset": 2048, 00:15:10.137 "data_size": 63488 00:15:10.137 } 00:15:10.137 ] 00:15:10.137 }' 00:15:10.137 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.137 07:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.395 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:10.396 07:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:10.654 [2024-11-20 07:12:07.784228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.591 07:12:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.591 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.591 07:12:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.591 "name": "raid_bdev1", 00:15:11.591 "uuid": "bc0afaa6-37f0-4847-a924-91e0623bf024", 00:15:11.591 "strip_size_kb": 64, 00:15:11.591 "state": "online", 00:15:11.591 "raid_level": "concat", 00:15:11.591 "superblock": true, 00:15:11.591 "num_base_bdevs": 4, 00:15:11.591 "num_base_bdevs_discovered": 4, 00:15:11.591 "num_base_bdevs_operational": 4, 00:15:11.591 "base_bdevs_list": [ 00:15:11.591 { 00:15:11.591 "name": "BaseBdev1", 00:15:11.591 "uuid": "52d0ce84-657e-54e6-a2f0-6ab92dc231b8", 00:15:11.591 "is_configured": true, 00:15:11.591 "data_offset": 2048, 00:15:11.592 "data_size": 63488 00:15:11.592 }, 00:15:11.592 { 00:15:11.592 "name": "BaseBdev2", 00:15:11.592 "uuid": "85eae749-65a6-554f-a537-a168507e0ff9", 00:15:11.592 "is_configured": true, 00:15:11.592 "data_offset": 2048, 00:15:11.592 "data_size": 63488 00:15:11.592 }, 00:15:11.592 { 00:15:11.592 "name": "BaseBdev3", 00:15:11.592 "uuid": "beb35f4b-ab9f-5dc8-bb74-0bb64d991059", 00:15:11.592 "is_configured": true, 00:15:11.592 "data_offset": 2048, 00:15:11.592 "data_size": 63488 00:15:11.592 }, 00:15:11.592 { 00:15:11.592 "name": "BaseBdev4", 00:15:11.592 "uuid": "49a1707a-59ce-5258-b1cc-172632f3046f", 00:15:11.592 "is_configured": true, 00:15:11.592 "data_offset": 2048, 00:15:11.592 "data_size": 63488 00:15:11.592 } 00:15:11.592 ] 00:15:11.592 }' 00:15:11.592 07:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.592 07:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.159 [2024-11-20 07:12:09.202614] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.159 [2024-11-20 07:12:09.202801] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.159 [2024-11-20 07:12:09.206429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.159 [2024-11-20 07:12:09.206635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.159 [2024-11-20 07:12:09.206741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.159 [2024-11-20 07:12:09.206984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.159 { 00:15:12.159 "results": [ 00:15:12.159 { 00:15:12.159 "job": "raid_bdev1", 00:15:12.159 "core_mask": "0x1", 00:15:12.159 "workload": "randrw", 00:15:12.159 "percentage": 50, 00:15:12.159 "status": "finished", 00:15:12.159 "queue_depth": 1, 00:15:12.159 "io_size": 131072, 00:15:12.159 "runtime": 1.415905, 00:15:12.159 "iops": 10975.312609249915, 00:15:12.159 "mibps": 1371.9140761562394, 00:15:12.159 "io_failed": 1, 00:15:12.159 "io_timeout": 0, 00:15:12.159 "avg_latency_us": 127.1916691917567, 00:15:12.159 "min_latency_us": 39.33090909090909, 00:15:12.159 "max_latency_us": 1869.2654545454545 00:15:12.159 } 00:15:12.159 ], 00:15:12.159 "core_count": 1 00:15:12.159 } 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72950 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72950 ']' 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72950 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72950 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.159 killing process with pid 72950 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72950' 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72950 00:15:12.159 [2024-11-20 07:12:09.242768] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.159 07:12:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72950 00:15:12.418 [2024-11-20 07:12:09.536490] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.355 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6ptewpN1cR 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:13.356 ************************************ 00:15:13.356 END TEST raid_read_error_test 00:15:13.356 ************************************ 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:15:13.356 00:15:13.356 real 0m4.826s 
00:15:13.356 user 0m5.898s 00:15:13.356 sys 0m0.606s 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.356 07:12:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 07:12:10 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:13.621 07:12:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:13.621 07:12:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.621 07:12:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.621 ************************************ 00:15:13.621 START TEST raid_write_error_test 00:15:13.621 ************************************ 00:15:13.621 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:15:13.621 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:15:13.621 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:13.621 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.p8ONxOmu7L 00:15:13.622 07:12:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73097 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73097 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73097 ']' 00:15:13.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.622 07:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.622 [2024-11-20 07:12:10.808252] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:13.622 [2024-11-20 07:12:10.808436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73097 ] 00:15:13.880 [2024-11-20 07:12:11.001120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.880 [2024-11-20 07:12:11.169968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.138 [2024-11-20 07:12:11.385357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.138 [2024-11-20 07:12:11.385416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.703 BaseBdev1_malloc 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.703 true 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.703 [2024-11-20 07:12:11.892424] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:14.703 [2024-11-20 07:12:11.892668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.703 [2024-11-20 07:12:11.892745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:14.703 [2024-11-20 07:12:11.892980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.703 [2024-11-20 07:12:11.895832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.703 BaseBdev1 00:15:14.703 [2024-11-20 07:12:11.896046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.703 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.704 BaseBdev2_malloc 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:14.704 07:12:11 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.704 true 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.704 [2024-11-20 07:12:11.957951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:14.704 [2024-11-20 07:12:11.958175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.704 [2024-11-20 07:12:11.958246] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:14.704 [2024-11-20 07:12:11.958418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.704 [2024-11-20 07:12:11.961333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.704 [2024-11-20 07:12:11.961386] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:14.704 BaseBdev2 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.704 07:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:14.704 BaseBdev3_malloc 00:15:14.704 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.704 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:14.704 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.704 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 true 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 [2024-11-20 07:12:12.030816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:14.962 [2024-11-20 07:12:12.031076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.962 [2024-11-20 07:12:12.031114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:14.962 [2024-11-20 07:12:12.031134] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.962 [2024-11-20 07:12:12.034097] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.962 [2024-11-20 07:12:12.034268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:14.962 BaseBdev3 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 BaseBdev4_malloc 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 true 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 [2024-11-20 07:12:12.091980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:14.962 [2024-11-20 07:12:12.092053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.962 [2024-11-20 07:12:12.092083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:14.962 [2024-11-20 07:12:12.092102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.962 [2024-11-20 07:12:12.094961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.962 [2024-11-20 07:12:12.095025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:14.962 BaseBdev4 
00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.962 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.962 [2024-11-20 07:12:12.100042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.963 [2024-11-20 07:12:12.102580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.963 [2024-11-20 07:12:12.102693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.963 [2024-11-20 07:12:12.102799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:14.963 [2024-11-20 07:12:12.103125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:14.963 [2024-11-20 07:12:12.103150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:14.963 [2024-11-20 07:12:12.103468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:14.963 [2024-11-20 07:12:12.103687] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:14.963 [2024-11-20 07:12:12.103706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:14.963 [2024-11-20 07:12:12.103990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.963 "name": "raid_bdev1", 00:15:14.963 "uuid": "19d912cc-63d1-45af-99ad-e45fab6f8a94", 00:15:14.963 "strip_size_kb": 64, 00:15:14.963 "state": "online", 00:15:14.963 "raid_level": "concat", 00:15:14.963 "superblock": true, 00:15:14.963 "num_base_bdevs": 4, 00:15:14.963 "num_base_bdevs_discovered": 4, 00:15:14.963 
"num_base_bdevs_operational": 4, 00:15:14.963 "base_bdevs_list": [ 00:15:14.963 { 00:15:14.963 "name": "BaseBdev1", 00:15:14.963 "uuid": "696eac98-c84e-58e4-bc86-1f5367125970", 00:15:14.963 "is_configured": true, 00:15:14.963 "data_offset": 2048, 00:15:14.963 "data_size": 63488 00:15:14.963 }, 00:15:14.963 { 00:15:14.963 "name": "BaseBdev2", 00:15:14.963 "uuid": "f6d34f1f-c993-5b85-ad3e-8bc59a525ae3", 00:15:14.963 "is_configured": true, 00:15:14.963 "data_offset": 2048, 00:15:14.963 "data_size": 63488 00:15:14.963 }, 00:15:14.963 { 00:15:14.963 "name": "BaseBdev3", 00:15:14.963 "uuid": "8f1f8e18-1344-5105-97db-82ece358ae5f", 00:15:14.963 "is_configured": true, 00:15:14.963 "data_offset": 2048, 00:15:14.963 "data_size": 63488 00:15:14.963 }, 00:15:14.963 { 00:15:14.963 "name": "BaseBdev4", 00:15:14.963 "uuid": "0cbe1423-5c3c-5db0-acac-108d6e3ef51b", 00:15:14.963 "is_configured": true, 00:15:14.963 "data_offset": 2048, 00:15:14.963 "data_size": 63488 00:15:14.963 } 00:15:14.963 ] 00:15:14.963 }' 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.963 07:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:15.529 07:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:15.529 [2024-11-20 07:12:12.765720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:16.463 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:16.463 07:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.463 07:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.464 07:12:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.464 "name": "raid_bdev1", 00:15:16.464 "uuid": "19d912cc-63d1-45af-99ad-e45fab6f8a94", 00:15:16.464 "strip_size_kb": 64, 00:15:16.464 "state": "online", 00:15:16.464 "raid_level": "concat", 00:15:16.464 "superblock": true, 00:15:16.464 "num_base_bdevs": 4, 00:15:16.464 "num_base_bdevs_discovered": 4, 00:15:16.464 "num_base_bdevs_operational": 4, 00:15:16.464 "base_bdevs_list": [ 00:15:16.464 { 00:15:16.464 "name": "BaseBdev1", 00:15:16.464 "uuid": "696eac98-c84e-58e4-bc86-1f5367125970", 00:15:16.464 "is_configured": true, 00:15:16.464 "data_offset": 2048, 00:15:16.464 "data_size": 63488 00:15:16.464 }, 00:15:16.464 { 00:15:16.464 "name": "BaseBdev2", 00:15:16.464 "uuid": "f6d34f1f-c993-5b85-ad3e-8bc59a525ae3", 00:15:16.464 "is_configured": true, 00:15:16.464 "data_offset": 2048, 00:15:16.464 "data_size": 63488 00:15:16.464 }, 00:15:16.464 { 00:15:16.464 "name": "BaseBdev3", 00:15:16.464 "uuid": "8f1f8e18-1344-5105-97db-82ece358ae5f", 00:15:16.464 "is_configured": true, 00:15:16.464 "data_offset": 2048, 00:15:16.464 "data_size": 63488 00:15:16.464 }, 00:15:16.464 { 00:15:16.464 "name": "BaseBdev4", 00:15:16.464 "uuid": "0cbe1423-5c3c-5db0-acac-108d6e3ef51b", 00:15:16.464 "is_configured": true, 00:15:16.464 "data_offset": 2048, 00:15:16.464 "data_size": 63488 00:15:16.464 } 00:15:16.464 ] 00:15:16.464 }' 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.464 07:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.031 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.031 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.031 07:12:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.031 [2024-11-20 07:12:14.202419] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.031 [2024-11-20 07:12:14.202474] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.031 [2024-11-20 07:12:14.205810] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.031 [2024-11-20 07:12:14.205889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.032 [2024-11-20 07:12:14.205983] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.032 [2024-11-20 07:12:14.206010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:17.032 { 00:15:17.032 "results": [ 00:15:17.032 { 00:15:17.032 "job": "raid_bdev1", 00:15:17.032 "core_mask": "0x1", 00:15:17.032 "workload": "randrw", 00:15:17.032 "percentage": 50, 00:15:17.032 "status": "finished", 00:15:17.032 "queue_depth": 1, 00:15:17.032 "io_size": 131072, 00:15:17.032 "runtime": 1.434086, 00:15:17.032 "iops": 9866.214439022486, 00:15:17.032 "mibps": 1233.2768048778107, 00:15:17.032 "io_failed": 1, 00:15:17.032 "io_timeout": 0, 00:15:17.032 "avg_latency_us": 141.03165820751687, 00:15:17.032 "min_latency_us": 40.96, 00:15:17.032 "max_latency_us": 1861.8181818181818 00:15:17.032 } 00:15:17.032 ], 00:15:17.032 "core_count": 1 00:15:17.032 } 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73097 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73097 ']' 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73097 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73097 00:15:17.032 killing process with pid 73097 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73097' 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73097 00:15:17.032 [2024-11-20 07:12:14.252064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.032 07:12:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73097 00:15:17.290 [2024-11-20 07:12:14.544817] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.p8ONxOmu7L 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:15:18.668 00:15:18.668 real 0m4.947s 00:15:18.668 user 0m6.127s 
00:15:18.668 sys 0m0.626s 00:15:18.668 ************************************ 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.668 07:12:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 END TEST raid_write_error_test 00:15:18.668 ************************************ 00:15:18.668 07:12:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:15:18.668 07:12:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:18.668 07:12:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:18.668 07:12:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.668 07:12:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 ************************************ 00:15:18.668 START TEST raid_state_function_test 00:15:18.668 ************************************ 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.668 
07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:18.668 07:12:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:18.668 Process raid pid: 73245 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73245 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73245' 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73245 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73245 ']' 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.668 07:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.668 [2024-11-20 07:12:15.787490] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:18.668 [2024-11-20 07:12:15.787668] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.668 [2024-11-20 07:12:15.971507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.926 [2024-11-20 07:12:16.125493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.185 [2024-11-20 07:12:16.334152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.185 [2024-11-20 07:12:16.334206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.779 [2024-11-20 07:12:16.837126] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:19.779 [2024-11-20 07:12:16.837336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:19.779 [2024-11-20 07:12:16.837367] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:19.779 [2024-11-20 07:12:16.837387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:19.779 [2024-11-20 07:12:16.837398] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:19.779 [2024-11-20 07:12:16.837412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:19.779 [2024-11-20 07:12:16.837422] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:19.779 [2024-11-20 07:12:16.837436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.779 "name": "Existed_Raid", 00:15:19.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.779 "strip_size_kb": 0, 00:15:19.779 "state": "configuring", 00:15:19.779 "raid_level": "raid1", 00:15:19.779 "superblock": false, 00:15:19.779 "num_base_bdevs": 4, 00:15:19.779 "num_base_bdevs_discovered": 0, 00:15:19.779 "num_base_bdevs_operational": 4, 00:15:19.779 "base_bdevs_list": [ 00:15:19.779 { 00:15:19.779 "name": "BaseBdev1", 00:15:19.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.779 "is_configured": false, 00:15:19.779 "data_offset": 0, 00:15:19.779 "data_size": 0 00:15:19.779 }, 00:15:19.779 { 00:15:19.779 "name": "BaseBdev2", 00:15:19.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.779 "is_configured": false, 00:15:19.779 "data_offset": 0, 00:15:19.779 "data_size": 0 00:15:19.779 }, 00:15:19.779 { 00:15:19.779 "name": "BaseBdev3", 00:15:19.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.779 "is_configured": false, 00:15:19.779 "data_offset": 0, 00:15:19.779 "data_size": 0 00:15:19.779 }, 00:15:19.779 { 00:15:19.779 "name": "BaseBdev4", 00:15:19.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.779 "is_configured": false, 00:15:19.779 "data_offset": 0, 00:15:19.779 "data_size": 0 00:15:19.779 } 00:15:19.779 ] 00:15:19.779 }' 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.779 07:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.038 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:15:20.038 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.038 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 [2024-11-20 07:12:17.353227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.297 [2024-11-20 07:12:17.353289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 [2024-11-20 07:12:17.361214] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:20.297 [2024-11-20 07:12:17.361394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:20.297 [2024-11-20 07:12:17.361518] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.297 [2024-11-20 07:12:17.361654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.297 [2024-11-20 07:12:17.361768] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.297 [2024-11-20 07:12:17.361948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.297 [2024-11-20 07:12:17.362072] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:20.297 [2024-11-20 07:12:17.362141] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 [2024-11-20 07:12:17.407124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.297 BaseBdev1 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 [ 00:15:20.297 { 00:15:20.297 "name": "BaseBdev1", 00:15:20.297 "aliases": [ 00:15:20.297 "685623c7-1af6-4734-bb05-f91b13c628d5" 00:15:20.297 ], 00:15:20.297 "product_name": "Malloc disk", 00:15:20.297 "block_size": 512, 00:15:20.297 "num_blocks": 65536, 00:15:20.297 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:20.297 "assigned_rate_limits": { 00:15:20.297 "rw_ios_per_sec": 0, 00:15:20.297 "rw_mbytes_per_sec": 0, 00:15:20.297 "r_mbytes_per_sec": 0, 00:15:20.297 "w_mbytes_per_sec": 0 00:15:20.297 }, 00:15:20.297 "claimed": true, 00:15:20.297 "claim_type": "exclusive_write", 00:15:20.297 "zoned": false, 00:15:20.297 "supported_io_types": { 00:15:20.297 "read": true, 00:15:20.297 "write": true, 00:15:20.297 "unmap": true, 00:15:20.297 "flush": true, 00:15:20.297 "reset": true, 00:15:20.297 "nvme_admin": false, 00:15:20.297 "nvme_io": false, 00:15:20.297 "nvme_io_md": false, 00:15:20.297 "write_zeroes": true, 00:15:20.297 "zcopy": true, 00:15:20.297 "get_zone_info": false, 00:15:20.297 "zone_management": false, 00:15:20.297 "zone_append": false, 00:15:20.297 "compare": false, 00:15:20.297 "compare_and_write": false, 00:15:20.297 "abort": true, 00:15:20.297 "seek_hole": false, 00:15:20.297 "seek_data": false, 00:15:20.297 "copy": true, 00:15:20.297 "nvme_iov_md": false 00:15:20.297 }, 00:15:20.297 "memory_domains": [ 00:15:20.297 { 00:15:20.297 "dma_device_id": "system", 00:15:20.297 "dma_device_type": 1 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.297 "dma_device_type": 2 00:15:20.297 } 00:15:20.297 ], 00:15:20.297 "driver_specific": {} 00:15:20.297 } 00:15:20.297 ] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.297 "name": "Existed_Raid", 
00:15:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.297 "strip_size_kb": 0, 00:15:20.297 "state": "configuring", 00:15:20.297 "raid_level": "raid1", 00:15:20.297 "superblock": false, 00:15:20.297 "num_base_bdevs": 4, 00:15:20.297 "num_base_bdevs_discovered": 1, 00:15:20.297 "num_base_bdevs_operational": 4, 00:15:20.297 "base_bdevs_list": [ 00:15:20.297 { 00:15:20.297 "name": "BaseBdev1", 00:15:20.297 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:20.297 "is_configured": true, 00:15:20.297 "data_offset": 0, 00:15:20.297 "data_size": 65536 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "name": "BaseBdev2", 00:15:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.297 "is_configured": false, 00:15:20.297 "data_offset": 0, 00:15:20.297 "data_size": 0 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "name": "BaseBdev3", 00:15:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.297 "is_configured": false, 00:15:20.297 "data_offset": 0, 00:15:20.297 "data_size": 0 00:15:20.297 }, 00:15:20.297 { 00:15:20.297 "name": "BaseBdev4", 00:15:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.297 "is_configured": false, 00:15:20.297 "data_offset": 0, 00:15:20.297 "data_size": 0 00:15:20.297 } 00:15:20.297 ] 00:15:20.297 }' 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.297 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.864 [2024-11-20 07:12:17.975336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:20.864 [2024-11-20 07:12:17.975534] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.864 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.864 [2024-11-20 07:12:17.987420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.864 [2024-11-20 07:12:17.990053] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:20.864 [2024-11-20 07:12:17.990233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:20.864 [2024-11-20 07:12:17.990358] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:20.865 [2024-11-20 07:12:17.990396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:20.865 [2024-11-20 07:12:17.990410] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:20.865 [2024-11-20 07:12:17.990425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:20.865 
07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.865 07:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.865 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.865 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.865 "name": "Existed_Raid", 00:15:20.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.865 "strip_size_kb": 0, 00:15:20.865 "state": "configuring", 00:15:20.865 "raid_level": "raid1", 00:15:20.865 "superblock": false, 00:15:20.865 "num_base_bdevs": 4, 00:15:20.865 "num_base_bdevs_discovered": 1, 
00:15:20.865 "num_base_bdevs_operational": 4, 00:15:20.865 "base_bdevs_list": [ 00:15:20.865 { 00:15:20.865 "name": "BaseBdev1", 00:15:20.865 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:20.865 "is_configured": true, 00:15:20.865 "data_offset": 0, 00:15:20.865 "data_size": 65536 00:15:20.865 }, 00:15:20.865 { 00:15:20.865 "name": "BaseBdev2", 00:15:20.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.865 "is_configured": false, 00:15:20.865 "data_offset": 0, 00:15:20.865 "data_size": 0 00:15:20.865 }, 00:15:20.865 { 00:15:20.865 "name": "BaseBdev3", 00:15:20.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.865 "is_configured": false, 00:15:20.865 "data_offset": 0, 00:15:20.865 "data_size": 0 00:15:20.865 }, 00:15:20.865 { 00:15:20.865 "name": "BaseBdev4", 00:15:20.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.865 "is_configured": false, 00:15:20.865 "data_offset": 0, 00:15:20.865 "data_size": 0 00:15:20.865 } 00:15:20.865 ] 00:15:20.865 }' 00:15:20.865 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.865 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.432 [2024-11-20 07:12:18.558900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.432 BaseBdev2 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.432 [ 00:15:21.432 { 00:15:21.432 "name": "BaseBdev2", 00:15:21.432 "aliases": [ 00:15:21.432 "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c" 00:15:21.432 ], 00:15:21.432 "product_name": "Malloc disk", 00:15:21.432 "block_size": 512, 00:15:21.432 "num_blocks": 65536, 00:15:21.432 "uuid": "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c", 00:15:21.432 "assigned_rate_limits": { 00:15:21.432 "rw_ios_per_sec": 0, 00:15:21.432 "rw_mbytes_per_sec": 0, 00:15:21.432 "r_mbytes_per_sec": 0, 00:15:21.432 "w_mbytes_per_sec": 0 00:15:21.432 }, 00:15:21.432 "claimed": true, 00:15:21.432 "claim_type": "exclusive_write", 00:15:21.432 "zoned": false, 00:15:21.432 "supported_io_types": { 00:15:21.432 "read": true, 
00:15:21.432 "write": true, 00:15:21.432 "unmap": true, 00:15:21.432 "flush": true, 00:15:21.432 "reset": true, 00:15:21.432 "nvme_admin": false, 00:15:21.432 "nvme_io": false, 00:15:21.432 "nvme_io_md": false, 00:15:21.432 "write_zeroes": true, 00:15:21.432 "zcopy": true, 00:15:21.432 "get_zone_info": false, 00:15:21.432 "zone_management": false, 00:15:21.432 "zone_append": false, 00:15:21.432 "compare": false, 00:15:21.432 "compare_and_write": false, 00:15:21.432 "abort": true, 00:15:21.432 "seek_hole": false, 00:15:21.432 "seek_data": false, 00:15:21.432 "copy": true, 00:15:21.432 "nvme_iov_md": false 00:15:21.432 }, 00:15:21.432 "memory_domains": [ 00:15:21.432 { 00:15:21.432 "dma_device_id": "system", 00:15:21.432 "dma_device_type": 1 00:15:21.432 }, 00:15:21.432 { 00:15:21.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:21.432 "dma_device_type": 2 00:15:21.432 } 00:15:21.432 ], 00:15:21.432 "driver_specific": {} 00:15:21.432 } 00:15:21.432 ] 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.432 "name": "Existed_Raid", 00:15:21.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.432 "strip_size_kb": 0, 00:15:21.432 "state": "configuring", 00:15:21.432 "raid_level": "raid1", 00:15:21.432 "superblock": false, 00:15:21.432 "num_base_bdevs": 4, 00:15:21.432 "num_base_bdevs_discovered": 2, 00:15:21.432 "num_base_bdevs_operational": 4, 00:15:21.432 "base_bdevs_list": [ 00:15:21.432 { 00:15:21.432 "name": "BaseBdev1", 00:15:21.432 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:21.432 "is_configured": true, 00:15:21.432 "data_offset": 0, 00:15:21.432 "data_size": 65536 00:15:21.432 }, 00:15:21.432 { 00:15:21.432 "name": "BaseBdev2", 00:15:21.432 "uuid": "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c", 00:15:21.432 "is_configured": true, 
00:15:21.432 "data_offset": 0, 00:15:21.432 "data_size": 65536 00:15:21.432 }, 00:15:21.432 { 00:15:21.432 "name": "BaseBdev3", 00:15:21.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.432 "is_configured": false, 00:15:21.432 "data_offset": 0, 00:15:21.432 "data_size": 0 00:15:21.432 }, 00:15:21.432 { 00:15:21.432 "name": "BaseBdev4", 00:15:21.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.432 "is_configured": false, 00:15:21.432 "data_offset": 0, 00:15:21.432 "data_size": 0 00:15:21.432 } 00:15:21.432 ] 00:15:21.432 }' 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.432 07:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.026 [2024-11-20 07:12:19.147249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.026 BaseBdev3 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.026 [ 00:15:22.026 { 00:15:22.026 "name": "BaseBdev3", 00:15:22.026 "aliases": [ 00:15:22.026 "bffbcae8-8737-41f7-a7f1-4b3ea1db7ec0" 00:15:22.026 ], 00:15:22.026 "product_name": "Malloc disk", 00:15:22.026 "block_size": 512, 00:15:22.026 "num_blocks": 65536, 00:15:22.026 "uuid": "bffbcae8-8737-41f7-a7f1-4b3ea1db7ec0", 00:15:22.026 "assigned_rate_limits": { 00:15:22.026 "rw_ios_per_sec": 0, 00:15:22.026 "rw_mbytes_per_sec": 0, 00:15:22.026 "r_mbytes_per_sec": 0, 00:15:22.026 "w_mbytes_per_sec": 0 00:15:22.026 }, 00:15:22.026 "claimed": true, 00:15:22.026 "claim_type": "exclusive_write", 00:15:22.026 "zoned": false, 00:15:22.026 "supported_io_types": { 00:15:22.026 "read": true, 00:15:22.026 "write": true, 00:15:22.026 "unmap": true, 00:15:22.026 "flush": true, 00:15:22.026 "reset": true, 00:15:22.026 "nvme_admin": false, 00:15:22.026 "nvme_io": false, 00:15:22.026 "nvme_io_md": false, 00:15:22.026 "write_zeroes": true, 00:15:22.026 "zcopy": true, 00:15:22.026 "get_zone_info": false, 00:15:22.026 "zone_management": false, 00:15:22.026 "zone_append": false, 00:15:22.026 "compare": false, 00:15:22.026 "compare_and_write": false, 
00:15:22.026 "abort": true, 00:15:22.026 "seek_hole": false, 00:15:22.026 "seek_data": false, 00:15:22.026 "copy": true, 00:15:22.026 "nvme_iov_md": false 00:15:22.026 }, 00:15:22.026 "memory_domains": [ 00:15:22.026 { 00:15:22.026 "dma_device_id": "system", 00:15:22.026 "dma_device_type": 1 00:15:22.026 }, 00:15:22.026 { 00:15:22.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.026 "dma_device_type": 2 00:15:22.026 } 00:15:22.026 ], 00:15:22.026 "driver_specific": {} 00:15:22.026 } 00:15:22.026 ] 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
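`verify_raid_bdev_state`, whose locals are visible in the trace, compares fields of the `bdev_raid_get_bdevs` JSON dump against expected values. A simplified, self-contained sketch follows — the real helper extracts the record with `jq`, while this version pulls two flat fields with plain parameter expansion; the field names (`state`, `num_base_bdevs_discovered`) are taken from the JSON dumps in this log:

```shell
# Simplified stand-in for the jq-based extraction in bdev_bdev_raid.sh:
# pull a value out of a flat JSON snippet. Assumes the value is followed
# by a comma, which holds for the fields checked below.
json_field() {
    # json_field JSON KEY -> prints the value of "KEY"
    value=${1#*\"$2\": }    # drop everything through '"KEY": '
    value=${value%%,*}      # cut at the next comma
    value=${value#\"}       # strip surrounding quotes, if any
    printf '%s\n' "${value%\"}"
}

verify_raid_bdev_state() {
    # Compare a dumped raid JSON record against an expected state and
    # discovered-bdev count (names mirror the locals in the trace).
    raid_json=$1 expected_state=$2 expected_discovered=$3
    [ "$(json_field "$raid_json" state)" = "$expected_state" ] &&
    [ "$(json_field "$raid_json" num_base_bdevs_discovered)" -eq "$expected_discovered" ]
}
```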
00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.026 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.027 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.027 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.027 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.027 "name": "Existed_Raid", 00:15:22.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.027 "strip_size_kb": 0, 00:15:22.027 "state": "configuring", 00:15:22.027 "raid_level": "raid1", 00:15:22.027 "superblock": false, 00:15:22.027 "num_base_bdevs": 4, 00:15:22.027 "num_base_bdevs_discovered": 3, 00:15:22.027 "num_base_bdevs_operational": 4, 00:15:22.027 "base_bdevs_list": [ 00:15:22.027 { 00:15:22.027 "name": "BaseBdev1", 00:15:22.027 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:22.027 "is_configured": true, 00:15:22.027 "data_offset": 0, 00:15:22.027 "data_size": 65536 00:15:22.027 }, 00:15:22.027 { 00:15:22.027 "name": "BaseBdev2", 00:15:22.027 "uuid": "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c", 00:15:22.027 "is_configured": true, 00:15:22.027 "data_offset": 0, 00:15:22.027 "data_size": 65536 00:15:22.027 }, 00:15:22.027 { 00:15:22.027 "name": "BaseBdev3", 00:15:22.027 "uuid": "bffbcae8-8737-41f7-a7f1-4b3ea1db7ec0", 00:15:22.027 "is_configured": true, 00:15:22.027 "data_offset": 0, 00:15:22.027 "data_size": 65536 00:15:22.027 }, 00:15:22.027 { 00:15:22.027 "name": "BaseBdev4", 00:15:22.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.027 "is_configured": false, 
00:15:22.027 "data_offset": 0, 00:15:22.027 "data_size": 0 00:15:22.027 } 00:15:22.027 ] 00:15:22.027 }' 00:15:22.027 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.027 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.593 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:22.593 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.593 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.593 [2024-11-20 07:12:19.731750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:22.593 [2024-11-20 07:12:19.731840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:22.593 [2024-11-20 07:12:19.731857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:22.593 [2024-11-20 07:12:19.732283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:22.593 [2024-11-20 07:12:19.732538] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:22.594 [2024-11-20 07:12:19.732569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:22.594 [2024-11-20 07:12:19.733030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.594 BaseBdev4 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.594 [ 00:15:22.594 { 00:15:22.594 "name": "BaseBdev4", 00:15:22.594 "aliases": [ 00:15:22.594 "35a58133-8f59-4039-9a2a-13d02202d9e9" 00:15:22.594 ], 00:15:22.594 "product_name": "Malloc disk", 00:15:22.594 "block_size": 512, 00:15:22.594 "num_blocks": 65536, 00:15:22.594 "uuid": "35a58133-8f59-4039-9a2a-13d02202d9e9", 00:15:22.594 "assigned_rate_limits": { 00:15:22.594 "rw_ios_per_sec": 0, 00:15:22.594 "rw_mbytes_per_sec": 0, 00:15:22.594 "r_mbytes_per_sec": 0, 00:15:22.594 "w_mbytes_per_sec": 0 00:15:22.594 }, 00:15:22.594 "claimed": true, 00:15:22.594 "claim_type": "exclusive_write", 00:15:22.594 "zoned": false, 00:15:22.594 "supported_io_types": { 00:15:22.594 "read": true, 00:15:22.594 "write": true, 00:15:22.594 "unmap": true, 00:15:22.594 "flush": true, 00:15:22.594 "reset": true, 00:15:22.594 
"nvme_admin": false, 00:15:22.594 "nvme_io": false, 00:15:22.594 "nvme_io_md": false, 00:15:22.594 "write_zeroes": true, 00:15:22.594 "zcopy": true, 00:15:22.594 "get_zone_info": false, 00:15:22.594 "zone_management": false, 00:15:22.594 "zone_append": false, 00:15:22.594 "compare": false, 00:15:22.594 "compare_and_write": false, 00:15:22.594 "abort": true, 00:15:22.594 "seek_hole": false, 00:15:22.594 "seek_data": false, 00:15:22.594 "copy": true, 00:15:22.594 "nvme_iov_md": false 00:15:22.594 }, 00:15:22.594 "memory_domains": [ 00:15:22.594 { 00:15:22.594 "dma_device_id": "system", 00:15:22.594 "dma_device_type": 1 00:15:22.594 }, 00:15:22.594 { 00:15:22.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.594 "dma_device_type": 2 00:15:22.594 } 00:15:22.594 ], 00:15:22.594 "driver_specific": {} 00:15:22.594 } 00:15:22.594 ] 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.594 07:12:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.594 "name": "Existed_Raid", 00:15:22.594 "uuid": "a6672d25-2413-4f25-b078-00dd1fd8e5bf", 00:15:22.594 "strip_size_kb": 0, 00:15:22.594 "state": "online", 00:15:22.594 "raid_level": "raid1", 00:15:22.594 "superblock": false, 00:15:22.594 "num_base_bdevs": 4, 00:15:22.594 "num_base_bdevs_discovered": 4, 00:15:22.594 "num_base_bdevs_operational": 4, 00:15:22.594 "base_bdevs_list": [ 00:15:22.594 { 00:15:22.594 "name": "BaseBdev1", 00:15:22.594 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:22.594 "is_configured": true, 00:15:22.594 "data_offset": 0, 00:15:22.594 "data_size": 65536 00:15:22.594 }, 00:15:22.594 { 00:15:22.594 "name": "BaseBdev2", 00:15:22.594 "uuid": "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c", 00:15:22.594 "is_configured": true, 00:15:22.594 "data_offset": 0, 00:15:22.594 "data_size": 65536 00:15:22.594 }, 00:15:22.594 { 00:15:22.594 "name": "BaseBdev3", 00:15:22.594 "uuid": 
"bffbcae8-8737-41f7-a7f1-4b3ea1db7ec0", 00:15:22.594 "is_configured": true, 00:15:22.594 "data_offset": 0, 00:15:22.594 "data_size": 65536 00:15:22.594 }, 00:15:22.594 { 00:15:22.594 "name": "BaseBdev4", 00:15:22.594 "uuid": "35a58133-8f59-4039-9a2a-13d02202d9e9", 00:15:22.594 "is_configured": true, 00:15:22.594 "data_offset": 0, 00:15:22.594 "data_size": 65536 00:15:22.594 } 00:15:22.594 ] 00:15:22.594 }' 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.594 07:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.161 [2024-11-20 07:12:20.276344] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:23.161 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.161 07:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:23.161 "name": "Existed_Raid", 00:15:23.161 "aliases": [ 00:15:23.161 "a6672d25-2413-4f25-b078-00dd1fd8e5bf" 00:15:23.161 ], 00:15:23.161 "product_name": "Raid Volume", 00:15:23.161 "block_size": 512, 00:15:23.161 "num_blocks": 65536, 00:15:23.161 "uuid": "a6672d25-2413-4f25-b078-00dd1fd8e5bf", 00:15:23.161 "assigned_rate_limits": { 00:15:23.161 "rw_ios_per_sec": 0, 00:15:23.161 "rw_mbytes_per_sec": 0, 00:15:23.161 "r_mbytes_per_sec": 0, 00:15:23.161 "w_mbytes_per_sec": 0 00:15:23.161 }, 00:15:23.161 "claimed": false, 00:15:23.161 "zoned": false, 00:15:23.161 "supported_io_types": { 00:15:23.161 "read": true, 00:15:23.161 "write": true, 00:15:23.161 "unmap": false, 00:15:23.161 "flush": false, 00:15:23.161 "reset": true, 00:15:23.161 "nvme_admin": false, 00:15:23.161 "nvme_io": false, 00:15:23.161 "nvme_io_md": false, 00:15:23.161 "write_zeroes": true, 00:15:23.161 "zcopy": false, 00:15:23.161 "get_zone_info": false, 00:15:23.161 "zone_management": false, 00:15:23.161 "zone_append": false, 00:15:23.161 "compare": false, 00:15:23.161 "compare_and_write": false, 00:15:23.161 "abort": false, 00:15:23.161 "seek_hole": false, 00:15:23.161 "seek_data": false, 00:15:23.161 "copy": false, 00:15:23.161 "nvme_iov_md": false 00:15:23.161 }, 00:15:23.161 "memory_domains": [ 00:15:23.161 { 00:15:23.161 "dma_device_id": "system", 00:15:23.161 "dma_device_type": 1 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.161 "dma_device_type": 2 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "system", 00:15:23.161 "dma_device_type": 1 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.161 "dma_device_type": 2 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "system", 00:15:23.161 "dma_device_type": 1 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:23.161 "dma_device_type": 2 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "system", 00:15:23.161 "dma_device_type": 1 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.161 "dma_device_type": 2 00:15:23.161 } 00:15:23.161 ], 00:15:23.161 "driver_specific": { 00:15:23.161 "raid": { 00:15:23.161 "uuid": "a6672d25-2413-4f25-b078-00dd1fd8e5bf", 00:15:23.161 "strip_size_kb": 0, 00:15:23.161 "state": "online", 00:15:23.161 "raid_level": "raid1", 00:15:23.161 "superblock": false, 00:15:23.161 "num_base_bdevs": 4, 00:15:23.161 "num_base_bdevs_discovered": 4, 00:15:23.161 "num_base_bdevs_operational": 4, 00:15:23.161 "base_bdevs_list": [ 00:15:23.161 { 00:15:23.161 "name": "BaseBdev1", 00:15:23.161 "uuid": "685623c7-1af6-4734-bb05-f91b13c628d5", 00:15:23.161 "is_configured": true, 00:15:23.161 "data_offset": 0, 00:15:23.161 "data_size": 65536 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "name": "BaseBdev2", 00:15:23.161 "uuid": "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c", 00:15:23.161 "is_configured": true, 00:15:23.161 "data_offset": 0, 00:15:23.161 "data_size": 65536 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "name": "BaseBdev3", 00:15:23.161 "uuid": "bffbcae8-8737-41f7-a7f1-4b3ea1db7ec0", 00:15:23.161 "is_configured": true, 00:15:23.161 "data_offset": 0, 00:15:23.161 "data_size": 65536 00:15:23.161 }, 00:15:23.161 { 00:15:23.161 "name": "BaseBdev4", 00:15:23.161 "uuid": "35a58133-8f59-4039-9a2a-13d02202d9e9", 00:15:23.162 "is_configured": true, 00:15:23.162 "data_offset": 0, 00:15:23.162 "data_size": 65536 00:15:23.162 } 00:15:23.162 ] 00:15:23.162 } 00:15:23.162 } 00:15:23.162 }' 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:23.162 BaseBdev2 00:15:23.162 BaseBdev3 
00:15:23.162 BaseBdev4' 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.162 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.421 07:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:23.421 07:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.421 [2024-11-20 07:12:20.644106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.421 
07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.421 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.689 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.689 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.689 "name": "Existed_Raid", 00:15:23.689 "uuid": "a6672d25-2413-4f25-b078-00dd1fd8e5bf", 00:15:23.689 "strip_size_kb": 0, 00:15:23.689 "state": "online", 00:15:23.689 "raid_level": "raid1", 00:15:23.689 "superblock": false, 00:15:23.689 "num_base_bdevs": 4, 00:15:23.689 "num_base_bdevs_discovered": 3, 00:15:23.689 "num_base_bdevs_operational": 3, 00:15:23.689 "base_bdevs_list": [ 00:15:23.689 { 00:15:23.689 "name": null, 00:15:23.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.689 "is_configured": false, 00:15:23.689 "data_offset": 0, 00:15:23.689 "data_size": 65536 00:15:23.689 }, 00:15:23.689 { 00:15:23.689 "name": "BaseBdev2", 00:15:23.689 "uuid": "d22b9ae2-23ec-4377-bfc6-ed6d2344e05c", 00:15:23.689 "is_configured": true, 00:15:23.689 "data_offset": 0, 00:15:23.689 "data_size": 65536 00:15:23.689 }, 00:15:23.689 { 00:15:23.689 "name": "BaseBdev3", 00:15:23.689 "uuid": "bffbcae8-8737-41f7-a7f1-4b3ea1db7ec0", 00:15:23.689 "is_configured": true, 00:15:23.689 "data_offset": 0, 
00:15:23.689 "data_size": 65536 00:15:23.689 }, 00:15:23.689 { 00:15:23.689 "name": "BaseBdev4", 00:15:23.689 "uuid": "35a58133-8f59-4039-9a2a-13d02202d9e9", 00:15:23.689 "is_configured": true, 00:15:23.689 "data_offset": 0, 00:15:23.689 "data_size": 65536 00:15:23.689 } 00:15:23.689 ] 00:15:23.689 }' 00:15:23.689 07:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.689 07:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.986 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.986 [2024-11-20 07:12:21.300741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.244 [2024-11-20 07:12:21.439634] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.244 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.502 [2024-11-20 07:12:21.582626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:24.502 [2024-11-20 07:12:21.582903] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:24.502 [2024-11-20 07:12:21.667187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.502 [2024-11-20 07:12:21.667263] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.502 [2024-11-20 07:12:21.667283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.502 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.503 BaseBdev2 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.503 [ 00:15:24.503 { 00:15:24.503 "name": "BaseBdev2", 00:15:24.503 "aliases": [ 00:15:24.503 "8bf073f3-d78a-4741-9da9-f378d31a3ced" 00:15:24.503 ], 00:15:24.503 "product_name": "Malloc disk", 00:15:24.503 "block_size": 512, 00:15:24.503 "num_blocks": 65536, 00:15:24.503 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:24.503 "assigned_rate_limits": { 00:15:24.503 "rw_ios_per_sec": 0, 00:15:24.503 "rw_mbytes_per_sec": 0, 00:15:24.503 "r_mbytes_per_sec": 0, 00:15:24.503 "w_mbytes_per_sec": 0 00:15:24.503 }, 00:15:24.503 "claimed": false, 00:15:24.503 "zoned": false, 00:15:24.503 "supported_io_types": { 00:15:24.503 "read": true, 00:15:24.503 "write": true, 00:15:24.503 "unmap": true, 00:15:24.503 "flush": true, 00:15:24.503 "reset": true, 00:15:24.503 "nvme_admin": false, 00:15:24.503 "nvme_io": false, 00:15:24.503 "nvme_io_md": false, 00:15:24.503 "write_zeroes": true, 00:15:24.503 "zcopy": true, 00:15:24.503 "get_zone_info": false, 00:15:24.503 "zone_management": false, 00:15:24.503 "zone_append": false, 00:15:24.503 "compare": false, 
00:15:24.503 "compare_and_write": false, 00:15:24.503 "abort": true, 00:15:24.503 "seek_hole": false, 00:15:24.503 "seek_data": false, 00:15:24.503 "copy": true, 00:15:24.503 "nvme_iov_md": false 00:15:24.503 }, 00:15:24.503 "memory_domains": [ 00:15:24.503 { 00:15:24.503 "dma_device_id": "system", 00:15:24.503 "dma_device_type": 1 00:15:24.503 }, 00:15:24.503 { 00:15:24.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.503 "dma_device_type": 2 00:15:24.503 } 00:15:24.503 ], 00:15:24.503 "driver_specific": {} 00:15:24.503 } 00:15:24.503 ] 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.503 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 BaseBdev3 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 [ 00:15:24.762 { 00:15:24.762 "name": "BaseBdev3", 00:15:24.762 "aliases": [ 00:15:24.762 "9b865f74-0fb4-4c69-9437-1464005cb754" 00:15:24.762 ], 00:15:24.762 "product_name": "Malloc disk", 00:15:24.762 "block_size": 512, 00:15:24.762 "num_blocks": 65536, 00:15:24.762 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:24.762 "assigned_rate_limits": { 00:15:24.762 "rw_ios_per_sec": 0, 00:15:24.762 "rw_mbytes_per_sec": 0, 00:15:24.762 "r_mbytes_per_sec": 0, 00:15:24.762 "w_mbytes_per_sec": 0 00:15:24.762 }, 00:15:24.762 "claimed": false, 00:15:24.762 "zoned": false, 00:15:24.762 "supported_io_types": { 00:15:24.762 "read": true, 00:15:24.762 "write": true, 00:15:24.762 "unmap": true, 00:15:24.762 "flush": true, 00:15:24.762 "reset": true, 00:15:24.762 "nvme_admin": false, 00:15:24.762 "nvme_io": false, 00:15:24.762 "nvme_io_md": false, 00:15:24.762 "write_zeroes": true, 00:15:24.762 "zcopy": true, 00:15:24.762 "get_zone_info": false, 00:15:24.762 "zone_management": false, 00:15:24.762 "zone_append": false, 00:15:24.762 "compare": false, 00:15:24.762 
"compare_and_write": false, 00:15:24.762 "abort": true, 00:15:24.762 "seek_hole": false, 00:15:24.762 "seek_data": false, 00:15:24.762 "copy": true, 00:15:24.762 "nvme_iov_md": false 00:15:24.762 }, 00:15:24.762 "memory_domains": [ 00:15:24.762 { 00:15:24.762 "dma_device_id": "system", 00:15:24.762 "dma_device_type": 1 00:15:24.762 }, 00:15:24.762 { 00:15:24.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.762 "dma_device_type": 2 00:15:24.762 } 00:15:24.762 ], 00:15:24.762 "driver_specific": {} 00:15:24.762 } 00:15:24.762 ] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 BaseBdev4 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.762 [ 00:15:24.762 { 00:15:24.762 "name": "BaseBdev4", 00:15:24.762 "aliases": [ 00:15:24.762 "ea4c97fa-6309-4926-b312-f399e7e3ac02" 00:15:24.762 ], 00:15:24.762 "product_name": "Malloc disk", 00:15:24.762 "block_size": 512, 00:15:24.762 "num_blocks": 65536, 00:15:24.762 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:24.762 "assigned_rate_limits": { 00:15:24.762 "rw_ios_per_sec": 0, 00:15:24.762 "rw_mbytes_per_sec": 0, 00:15:24.762 "r_mbytes_per_sec": 0, 00:15:24.762 "w_mbytes_per_sec": 0 00:15:24.762 }, 00:15:24.762 "claimed": false, 00:15:24.762 "zoned": false, 00:15:24.762 "supported_io_types": { 00:15:24.762 "read": true, 00:15:24.762 "write": true, 00:15:24.762 "unmap": true, 00:15:24.762 "flush": true, 00:15:24.762 "reset": true, 00:15:24.762 "nvme_admin": false, 00:15:24.762 "nvme_io": false, 00:15:24.762 "nvme_io_md": false, 00:15:24.762 "write_zeroes": true, 00:15:24.762 "zcopy": true, 00:15:24.762 "get_zone_info": false, 00:15:24.762 "zone_management": false, 00:15:24.762 "zone_append": false, 00:15:24.762 "compare": false, 00:15:24.762 
"compare_and_write": false, 00:15:24.762 "abort": true, 00:15:24.762 "seek_hole": false, 00:15:24.762 "seek_data": false, 00:15:24.762 "copy": true, 00:15:24.762 "nvme_iov_md": false 00:15:24.762 }, 00:15:24.762 "memory_domains": [ 00:15:24.762 { 00:15:24.762 "dma_device_id": "system", 00:15:24.762 "dma_device_type": 1 00:15:24.762 }, 00:15:24.762 { 00:15:24.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.762 "dma_device_type": 2 00:15:24.762 } 00:15:24.762 ], 00:15:24.762 "driver_specific": {} 00:15:24.762 } 00:15:24.762 ] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.762 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 [2024-11-20 07:12:21.942398] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.763 [2024-11-20 07:12:21.942581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.763 [2024-11-20 07:12:21.942712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.763 [2024-11-20 07:12:21.945126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.763 [2024-11-20 07:12:21.945310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.763 "name": "Existed_Raid", 00:15:24.763 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:24.763 "strip_size_kb": 0, 00:15:24.763 "state": "configuring", 00:15:24.763 "raid_level": "raid1", 00:15:24.763 "superblock": false, 00:15:24.763 "num_base_bdevs": 4, 00:15:24.763 "num_base_bdevs_discovered": 3, 00:15:24.763 "num_base_bdevs_operational": 4, 00:15:24.763 "base_bdevs_list": [ 00:15:24.763 { 00:15:24.763 "name": "BaseBdev1", 00:15:24.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.763 "is_configured": false, 00:15:24.763 "data_offset": 0, 00:15:24.763 "data_size": 0 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "name": "BaseBdev2", 00:15:24.763 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:24.763 "is_configured": true, 00:15:24.763 "data_offset": 0, 00:15:24.763 "data_size": 65536 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "name": "BaseBdev3", 00:15:24.763 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:24.763 "is_configured": true, 00:15:24.763 "data_offset": 0, 00:15:24.763 "data_size": 65536 00:15:24.763 }, 00:15:24.763 { 00:15:24.763 "name": "BaseBdev4", 00:15:24.763 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:24.763 "is_configured": true, 00:15:24.763 "data_offset": 0, 00:15:24.763 "data_size": 65536 00:15:24.763 } 00:15:24.763 ] 00:15:24.763 }' 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.763 07:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.329 [2024-11-20 07:12:22.470574] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.329 "name": "Existed_Raid", 00:15:25.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.329 
"strip_size_kb": 0, 00:15:25.329 "state": "configuring", 00:15:25.329 "raid_level": "raid1", 00:15:25.329 "superblock": false, 00:15:25.329 "num_base_bdevs": 4, 00:15:25.329 "num_base_bdevs_discovered": 2, 00:15:25.329 "num_base_bdevs_operational": 4, 00:15:25.329 "base_bdevs_list": [ 00:15:25.329 { 00:15:25.329 "name": "BaseBdev1", 00:15:25.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.329 "is_configured": false, 00:15:25.329 "data_offset": 0, 00:15:25.329 "data_size": 0 00:15:25.329 }, 00:15:25.329 { 00:15:25.329 "name": null, 00:15:25.329 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:25.329 "is_configured": false, 00:15:25.329 "data_offset": 0, 00:15:25.329 "data_size": 65536 00:15:25.329 }, 00:15:25.329 { 00:15:25.329 "name": "BaseBdev3", 00:15:25.329 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:25.329 "is_configured": true, 00:15:25.329 "data_offset": 0, 00:15:25.329 "data_size": 65536 00:15:25.329 }, 00:15:25.329 { 00:15:25.329 "name": "BaseBdev4", 00:15:25.329 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:25.329 "is_configured": true, 00:15:25.329 "data_offset": 0, 00:15:25.329 "data_size": 65536 00:15:25.329 } 00:15:25.329 ] 00:15:25.329 }' 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.329 07:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.896 07:12:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.896 [2024-11-20 07:12:23.108259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.896 BaseBdev1 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.896 [ 00:15:25.896 { 00:15:25.896 "name": "BaseBdev1", 00:15:25.896 "aliases": [ 00:15:25.896 "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9" 00:15:25.896 ], 00:15:25.896 "product_name": "Malloc disk", 00:15:25.896 "block_size": 512, 00:15:25.896 "num_blocks": 65536, 00:15:25.896 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:25.896 "assigned_rate_limits": { 00:15:25.896 "rw_ios_per_sec": 0, 00:15:25.896 "rw_mbytes_per_sec": 0, 00:15:25.896 "r_mbytes_per_sec": 0, 00:15:25.896 "w_mbytes_per_sec": 0 00:15:25.896 }, 00:15:25.896 "claimed": true, 00:15:25.896 "claim_type": "exclusive_write", 00:15:25.896 "zoned": false, 00:15:25.896 "supported_io_types": { 00:15:25.896 "read": true, 00:15:25.896 "write": true, 00:15:25.896 "unmap": true, 00:15:25.896 "flush": true, 00:15:25.896 "reset": true, 00:15:25.896 "nvme_admin": false, 00:15:25.896 "nvme_io": false, 00:15:25.896 "nvme_io_md": false, 00:15:25.896 "write_zeroes": true, 00:15:25.896 "zcopy": true, 00:15:25.896 "get_zone_info": false, 00:15:25.896 "zone_management": false, 00:15:25.896 "zone_append": false, 00:15:25.896 "compare": false, 00:15:25.896 "compare_and_write": false, 00:15:25.896 "abort": true, 00:15:25.896 "seek_hole": false, 00:15:25.896 "seek_data": false, 00:15:25.896 "copy": true, 00:15:25.896 "nvme_iov_md": false 00:15:25.896 }, 00:15:25.896 "memory_domains": [ 00:15:25.896 { 00:15:25.896 "dma_device_id": "system", 00:15:25.896 "dma_device_type": 1 00:15:25.896 }, 00:15:25.896 { 00:15:25.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.896 "dma_device_type": 2 00:15:25.896 } 00:15:25.896 ], 00:15:25.896 "driver_specific": {} 00:15:25.896 } 00:15:25.896 ] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.896 "name": "Existed_Raid", 00:15:25.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.896 
"strip_size_kb": 0, 00:15:25.896 "state": "configuring", 00:15:25.896 "raid_level": "raid1", 00:15:25.896 "superblock": false, 00:15:25.896 "num_base_bdevs": 4, 00:15:25.896 "num_base_bdevs_discovered": 3, 00:15:25.896 "num_base_bdevs_operational": 4, 00:15:25.896 "base_bdevs_list": [ 00:15:25.896 { 00:15:25.896 "name": "BaseBdev1", 00:15:25.896 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:25.896 "is_configured": true, 00:15:25.896 "data_offset": 0, 00:15:25.896 "data_size": 65536 00:15:25.896 }, 00:15:25.896 { 00:15:25.896 "name": null, 00:15:25.896 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:25.896 "is_configured": false, 00:15:25.896 "data_offset": 0, 00:15:25.896 "data_size": 65536 00:15:25.896 }, 00:15:25.896 { 00:15:25.896 "name": "BaseBdev3", 00:15:25.896 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:25.896 "is_configured": true, 00:15:25.896 "data_offset": 0, 00:15:25.896 "data_size": 65536 00:15:25.896 }, 00:15:25.896 { 00:15:25.896 "name": "BaseBdev4", 00:15:25.896 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:25.896 "is_configured": true, 00:15:25.896 "data_offset": 0, 00:15:25.896 "data_size": 65536 00:15:25.896 } 00:15:25.896 ] 00:15:25.896 }' 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.896 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.537 
07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.537 [2024-11-20 07:12:23.732558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.537 "name": "Existed_Raid", 00:15:26.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.537 "strip_size_kb": 0, 00:15:26.537 "state": "configuring", 00:15:26.537 "raid_level": "raid1", 00:15:26.537 "superblock": false, 00:15:26.537 "num_base_bdevs": 4, 00:15:26.537 "num_base_bdevs_discovered": 2, 00:15:26.537 "num_base_bdevs_operational": 4, 00:15:26.537 "base_bdevs_list": [ 00:15:26.537 { 00:15:26.537 "name": "BaseBdev1", 00:15:26.537 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:26.537 "is_configured": true, 00:15:26.537 "data_offset": 0, 00:15:26.537 "data_size": 65536 00:15:26.537 }, 00:15:26.537 { 00:15:26.537 "name": null, 00:15:26.537 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:26.537 "is_configured": false, 00:15:26.537 "data_offset": 0, 00:15:26.537 "data_size": 65536 00:15:26.537 }, 00:15:26.537 { 00:15:26.537 "name": null, 00:15:26.537 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:26.537 "is_configured": false, 00:15:26.537 "data_offset": 0, 00:15:26.537 "data_size": 65536 00:15:26.537 }, 00:15:26.537 { 00:15:26.537 "name": "BaseBdev4", 00:15:26.537 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:26.537 "is_configured": true, 00:15:26.537 "data_offset": 0, 00:15:26.537 "data_size": 65536 00:15:26.537 } 00:15:26.537 ] 00:15:26.537 }' 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.537 07:12:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 [2024-11-20 07:12:24.320706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.102 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.102 "name": "Existed_Raid", 00:15:27.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.103 "strip_size_kb": 0, 00:15:27.103 "state": "configuring", 00:15:27.103 "raid_level": "raid1", 00:15:27.103 "superblock": false, 00:15:27.103 "num_base_bdevs": 4, 00:15:27.103 "num_base_bdevs_discovered": 3, 00:15:27.103 "num_base_bdevs_operational": 4, 00:15:27.103 "base_bdevs_list": [ 00:15:27.103 { 00:15:27.103 "name": "BaseBdev1", 00:15:27.103 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:27.103 "is_configured": true, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 }, 00:15:27.103 { 00:15:27.103 "name": null, 00:15:27.103 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:27.103 "is_configured": false, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 }, 00:15:27.103 { 
00:15:27.103 "name": "BaseBdev3", 00:15:27.103 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:27.103 "is_configured": true, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 }, 00:15:27.103 { 00:15:27.103 "name": "BaseBdev4", 00:15:27.103 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:27.103 "is_configured": true, 00:15:27.103 "data_offset": 0, 00:15:27.103 "data_size": 65536 00:15:27.103 } 00:15:27.103 ] 00:15:27.103 }' 00:15:27.103 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.103 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.689 [2024-11-20 07:12:24.892923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.689 07:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.977 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.977 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.977 "name": "Existed_Raid", 00:15:27.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.977 "strip_size_kb": 0, 00:15:27.977 "state": "configuring", 00:15:27.977 "raid_level": "raid1", 00:15:27.977 "superblock": false, 00:15:27.977 
"num_base_bdevs": 4, 00:15:27.977 "num_base_bdevs_discovered": 2, 00:15:27.977 "num_base_bdevs_operational": 4, 00:15:27.977 "base_bdevs_list": [ 00:15:27.977 { 00:15:27.977 "name": null, 00:15:27.977 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:27.977 "is_configured": false, 00:15:27.977 "data_offset": 0, 00:15:27.977 "data_size": 65536 00:15:27.977 }, 00:15:27.977 { 00:15:27.977 "name": null, 00:15:27.977 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:27.977 "is_configured": false, 00:15:27.977 "data_offset": 0, 00:15:27.977 "data_size": 65536 00:15:27.977 }, 00:15:27.977 { 00:15:27.977 "name": "BaseBdev3", 00:15:27.977 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:27.977 "is_configured": true, 00:15:27.977 "data_offset": 0, 00:15:27.977 "data_size": 65536 00:15:27.977 }, 00:15:27.977 { 00:15:27.977 "name": "BaseBdev4", 00:15:27.977 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:27.977 "is_configured": true, 00:15:27.977 "data_offset": 0, 00:15:27.977 "data_size": 65536 00:15:27.977 } 00:15:27.977 ] 00:15:27.977 }' 00:15:27.977 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.977 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.235 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:28.235 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.235 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.235 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.235 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:28.493 07:12:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.493 [2024-11-20 07:12:25.565307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.493 07:12:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.493 "name": "Existed_Raid", 00:15:28.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.493 "strip_size_kb": 0, 00:15:28.493 "state": "configuring", 00:15:28.493 "raid_level": "raid1", 00:15:28.493 "superblock": false, 00:15:28.493 "num_base_bdevs": 4, 00:15:28.493 "num_base_bdevs_discovered": 3, 00:15:28.493 "num_base_bdevs_operational": 4, 00:15:28.493 "base_bdevs_list": [ 00:15:28.493 { 00:15:28.493 "name": null, 00:15:28.493 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:28.493 "is_configured": false, 00:15:28.493 "data_offset": 0, 00:15:28.493 "data_size": 65536 00:15:28.493 }, 00:15:28.493 { 00:15:28.493 "name": "BaseBdev2", 00:15:28.493 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:28.493 "is_configured": true, 00:15:28.493 "data_offset": 0, 00:15:28.493 "data_size": 65536 00:15:28.493 }, 00:15:28.493 { 00:15:28.493 "name": "BaseBdev3", 00:15:28.493 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:28.493 "is_configured": true, 00:15:28.493 "data_offset": 0, 00:15:28.493 "data_size": 65536 00:15:28.493 }, 00:15:28.493 { 00:15:28.493 "name": "BaseBdev4", 00:15:28.493 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:28.493 "is_configured": true, 00:15:28.493 "data_offset": 0, 00:15:28.493 "data_size": 65536 00:15:28.493 } 00:15:28.493 ] 00:15:28.493 }' 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.493 07:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 [2024-11-20 07:12:26.215652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:29.061 [2024-11-20 07:12:26.215974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:29.061 [2024-11-20 07:12:26.216005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:29.061 [2024-11-20 07:12:26.216337] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:29.061 [2024-11-20 07:12:26.216556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:29.061 [2024-11-20 07:12:26.216572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:29.061 [2024-11-20 07:12:26.217053] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.061 NewBaseBdev 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.061 07:12:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 [ 00:15:29.061 { 00:15:29.061 "name": "NewBaseBdev", 00:15:29.061 "aliases": [ 00:15:29.061 "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9" 00:15:29.061 ], 00:15:29.061 "product_name": "Malloc disk", 00:15:29.061 "block_size": 512, 00:15:29.061 "num_blocks": 65536, 00:15:29.061 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:29.061 "assigned_rate_limits": { 00:15:29.061 "rw_ios_per_sec": 0, 00:15:29.061 "rw_mbytes_per_sec": 0, 00:15:29.061 "r_mbytes_per_sec": 0, 00:15:29.061 "w_mbytes_per_sec": 0 00:15:29.061 }, 00:15:29.061 "claimed": true, 00:15:29.061 "claim_type": "exclusive_write", 00:15:29.061 "zoned": false, 00:15:29.061 "supported_io_types": { 00:15:29.061 "read": true, 00:15:29.061 "write": true, 00:15:29.061 "unmap": true, 00:15:29.061 "flush": true, 00:15:29.061 "reset": true, 00:15:29.061 "nvme_admin": false, 00:15:29.061 "nvme_io": false, 00:15:29.061 "nvme_io_md": false, 00:15:29.061 "write_zeroes": true, 00:15:29.061 "zcopy": true, 00:15:29.061 "get_zone_info": false, 00:15:29.061 "zone_management": false, 00:15:29.061 "zone_append": false, 00:15:29.061 "compare": false, 00:15:29.061 "compare_and_write": false, 00:15:29.061 "abort": true, 00:15:29.061 "seek_hole": false, 00:15:29.061 "seek_data": false, 00:15:29.061 "copy": true, 00:15:29.061 "nvme_iov_md": false 00:15:29.061 }, 00:15:29.061 "memory_domains": [ 00:15:29.061 { 00:15:29.061 "dma_device_id": "system", 00:15:29.061 "dma_device_type": 1 00:15:29.061 }, 00:15:29.061 { 00:15:29.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.061 "dma_device_type": 2 00:15:29.061 } 00:15:29.061 ], 00:15:29.061 "driver_specific": {} 00:15:29.061 } 00:15:29.061 ] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.061 07:12:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.061 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.062 "name": "Existed_Raid", 00:15:29.062 "uuid": "8fcac530-ff9c-4537-b1b6-c9b410cf7e95", 00:15:29.062 "strip_size_kb": 0, 00:15:29.062 "state": "online", 00:15:29.062 "raid_level": "raid1", 
00:15:29.062 "superblock": false, 00:15:29.062 "num_base_bdevs": 4, 00:15:29.062 "num_base_bdevs_discovered": 4, 00:15:29.062 "num_base_bdevs_operational": 4, 00:15:29.062 "base_bdevs_list": [ 00:15:29.062 { 00:15:29.062 "name": "NewBaseBdev", 00:15:29.062 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:29.062 "is_configured": true, 00:15:29.062 "data_offset": 0, 00:15:29.062 "data_size": 65536 00:15:29.062 }, 00:15:29.062 { 00:15:29.062 "name": "BaseBdev2", 00:15:29.062 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:29.062 "is_configured": true, 00:15:29.062 "data_offset": 0, 00:15:29.062 "data_size": 65536 00:15:29.062 }, 00:15:29.062 { 00:15:29.062 "name": "BaseBdev3", 00:15:29.062 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:29.062 "is_configured": true, 00:15:29.062 "data_offset": 0, 00:15:29.062 "data_size": 65536 00:15:29.062 }, 00:15:29.062 { 00:15:29.062 "name": "BaseBdev4", 00:15:29.062 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:29.062 "is_configured": true, 00:15:29.062 "data_offset": 0, 00:15:29.062 "data_size": 65536 00:15:29.062 } 00:15:29.062 ] 00:15:29.062 }' 00:15:29.062 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.062 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.629 [2024-11-20 07:12:26.752270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:29.629 "name": "Existed_Raid", 00:15:29.629 "aliases": [ 00:15:29.629 "8fcac530-ff9c-4537-b1b6-c9b410cf7e95" 00:15:29.629 ], 00:15:29.629 "product_name": "Raid Volume", 00:15:29.629 "block_size": 512, 00:15:29.629 "num_blocks": 65536, 00:15:29.629 "uuid": "8fcac530-ff9c-4537-b1b6-c9b410cf7e95", 00:15:29.629 "assigned_rate_limits": { 00:15:29.629 "rw_ios_per_sec": 0, 00:15:29.629 "rw_mbytes_per_sec": 0, 00:15:29.629 "r_mbytes_per_sec": 0, 00:15:29.629 "w_mbytes_per_sec": 0 00:15:29.629 }, 00:15:29.629 "claimed": false, 00:15:29.629 "zoned": false, 00:15:29.629 "supported_io_types": { 00:15:29.629 "read": true, 00:15:29.629 "write": true, 00:15:29.629 "unmap": false, 00:15:29.629 "flush": false, 00:15:29.629 "reset": true, 00:15:29.629 "nvme_admin": false, 00:15:29.629 "nvme_io": false, 00:15:29.629 "nvme_io_md": false, 00:15:29.629 "write_zeroes": true, 00:15:29.629 "zcopy": false, 00:15:29.629 "get_zone_info": false, 00:15:29.629 "zone_management": false, 00:15:29.629 "zone_append": false, 00:15:29.629 "compare": false, 00:15:29.629 "compare_and_write": false, 00:15:29.629 "abort": false, 00:15:29.629 "seek_hole": false, 00:15:29.629 "seek_data": false, 00:15:29.629 "copy": false, 00:15:29.629 
"nvme_iov_md": false 00:15:29.629 }, 00:15:29.629 "memory_domains": [ 00:15:29.629 { 00:15:29.629 "dma_device_id": "system", 00:15:29.629 "dma_device_type": 1 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.629 "dma_device_type": 2 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "system", 00:15:29.629 "dma_device_type": 1 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.629 "dma_device_type": 2 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "system", 00:15:29.629 "dma_device_type": 1 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.629 "dma_device_type": 2 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "system", 00:15:29.629 "dma_device_type": 1 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.629 "dma_device_type": 2 00:15:29.629 } 00:15:29.629 ], 00:15:29.629 "driver_specific": { 00:15:29.629 "raid": { 00:15:29.629 "uuid": "8fcac530-ff9c-4537-b1b6-c9b410cf7e95", 00:15:29.629 "strip_size_kb": 0, 00:15:29.629 "state": "online", 00:15:29.629 "raid_level": "raid1", 00:15:29.629 "superblock": false, 00:15:29.629 "num_base_bdevs": 4, 00:15:29.629 "num_base_bdevs_discovered": 4, 00:15:29.629 "num_base_bdevs_operational": 4, 00:15:29.629 "base_bdevs_list": [ 00:15:29.629 { 00:15:29.629 "name": "NewBaseBdev", 00:15:29.629 "uuid": "1ef2a6d8-609a-43b4-a5d9-1cc05936e5c9", 00:15:29.629 "is_configured": true, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 65536 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "name": "BaseBdev2", 00:15:29.629 "uuid": "8bf073f3-d78a-4741-9da9-f378d31a3ced", 00:15:29.629 "is_configured": true, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 65536 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "name": "BaseBdev3", 00:15:29.629 "uuid": "9b865f74-0fb4-4c69-9437-1464005cb754", 00:15:29.629 "is_configured": true, 
00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 65536 00:15:29.629 }, 00:15:29.629 { 00:15:29.629 "name": "BaseBdev4", 00:15:29.629 "uuid": "ea4c97fa-6309-4926-b312-f399e7e3ac02", 00:15:29.629 "is_configured": true, 00:15:29.629 "data_offset": 0, 00:15:29.629 "data_size": 65536 00:15:29.629 } 00:15:29.629 ] 00:15:29.629 } 00:15:29.629 } 00:15:29.629 }' 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:29.629 BaseBdev2 00:15:29.629 BaseBdev3 00:15:29.629 BaseBdev4' 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:29.629 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.630 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.889 07:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.889 [2024-11-20 07:12:27.123964] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.889 [2024-11-20 07:12:27.124117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.889 [2024-11-20 07:12:27.124342] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.889 [2024-11-20 07:12:27.124805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.889 [2024-11-20 07:12:27.124840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73245 
00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73245 ']' 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73245 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73245 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73245' 00:15:29.889 killing process with pid 73245 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73245 00:15:29.889 [2024-11-20 07:12:27.162839] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.889 07:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73245 00:15:30.457 [2024-11-20 07:12:27.512498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:31.392 ************************************ 00:15:31.392 END TEST raid_state_function_test 00:15:31.392 ************************************ 00:15:31.392 00:15:31.392 real 0m12.851s 00:15:31.392 user 0m21.510s 00:15:31.392 sys 0m1.666s 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.392 07:12:28 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:31.392 07:12:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:31.392 07:12:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.392 07:12:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:31.392 ************************************ 00:15:31.392 START TEST raid_state_function_test_sb 00:15:31.392 ************************************ 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.392 07:12:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:31.392 Process raid pid: 73923 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73923 00:15:31.392 07:12:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73923' 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73923 00:15:31.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73923 ']' 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.393 07:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.393 [2024-11-20 07:12:28.708781] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:31.393 [2024-11-20 07:12:28.709261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.727 [2024-11-20 07:12:28.895576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.005 [2024-11-20 07:12:29.035384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.005 [2024-11-20 07:12:29.246222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.005 [2024-11-20 07:12:29.246486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.575 [2024-11-20 07:12:29.694076] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.575 [2024-11-20 07:12:29.694144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.575 [2024-11-20 07:12:29.694161] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.575 [2024-11-20 07:12:29.694177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.575 [2024-11-20 07:12:29.694187] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:32.575 [2024-11-20 07:12:29.694201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.575 [2024-11-20 07:12:29.694210] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:32.575 [2024-11-20 07:12:29.694224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.575 07:12:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.575 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.575 "name": "Existed_Raid", 00:15:32.575 "uuid": "acb3e7ca-1b43-4c1e-a171-054cb50fe711", 00:15:32.575 "strip_size_kb": 0, 00:15:32.575 "state": "configuring", 00:15:32.575 "raid_level": "raid1", 00:15:32.575 "superblock": true, 00:15:32.575 "num_base_bdevs": 4, 00:15:32.575 "num_base_bdevs_discovered": 0, 00:15:32.575 "num_base_bdevs_operational": 4, 00:15:32.575 "base_bdevs_list": [ 00:15:32.575 { 00:15:32.575 "name": "BaseBdev1", 00:15:32.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.575 "is_configured": false, 00:15:32.575 "data_offset": 0, 00:15:32.575 "data_size": 0 00:15:32.575 }, 00:15:32.575 { 00:15:32.575 "name": "BaseBdev2", 00:15:32.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.575 "is_configured": false, 00:15:32.575 "data_offset": 0, 00:15:32.575 "data_size": 0 00:15:32.575 }, 00:15:32.575 { 00:15:32.575 "name": "BaseBdev3", 00:15:32.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.575 "is_configured": false, 00:15:32.575 "data_offset": 0, 00:15:32.575 "data_size": 0 00:15:32.575 }, 00:15:32.575 { 00:15:32.575 "name": "BaseBdev4", 00:15:32.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.576 "is_configured": false, 00:15:32.576 "data_offset": 0, 00:15:32.576 "data_size": 0 00:15:32.576 } 00:15:32.576 ] 00:15:32.576 }' 00:15:32.576 07:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.576 07:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.142 07:12:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.143 [2024-11-20 07:12:30.214132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.143 [2024-11-20 07:12:30.214180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.143 [2024-11-20 07:12:30.222108] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.143 [2024-11-20 07:12:30.222159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.143 [2024-11-20 07:12:30.222174] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.143 [2024-11-20 07:12:30.222189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.143 [2024-11-20 07:12:30.222199] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.143 [2024-11-20 07:12:30.222212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.143 [2024-11-20 07:12:30.222221] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:15:33.143 [2024-11-20 07:12:30.222235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.143 [2024-11-20 07:12:30.267086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.143 BaseBdev1 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.143 [ 00:15:33.143 { 00:15:33.143 "name": "BaseBdev1", 00:15:33.143 "aliases": [ 00:15:33.143 "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a" 00:15:33.143 ], 00:15:33.143 "product_name": "Malloc disk", 00:15:33.143 "block_size": 512, 00:15:33.143 "num_blocks": 65536, 00:15:33.143 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:33.143 "assigned_rate_limits": { 00:15:33.143 "rw_ios_per_sec": 0, 00:15:33.143 "rw_mbytes_per_sec": 0, 00:15:33.143 "r_mbytes_per_sec": 0, 00:15:33.143 "w_mbytes_per_sec": 0 00:15:33.143 }, 00:15:33.143 "claimed": true, 00:15:33.143 "claim_type": "exclusive_write", 00:15:33.143 "zoned": false, 00:15:33.143 "supported_io_types": { 00:15:33.143 "read": true, 00:15:33.143 "write": true, 00:15:33.143 "unmap": true, 00:15:33.143 "flush": true, 00:15:33.143 "reset": true, 00:15:33.143 "nvme_admin": false, 00:15:33.143 "nvme_io": false, 00:15:33.143 "nvme_io_md": false, 00:15:33.143 "write_zeroes": true, 00:15:33.143 "zcopy": true, 00:15:33.143 "get_zone_info": false, 00:15:33.143 "zone_management": false, 00:15:33.143 "zone_append": false, 00:15:33.143 "compare": false, 00:15:33.143 "compare_and_write": false, 00:15:33.143 "abort": true, 00:15:33.143 "seek_hole": false, 00:15:33.143 "seek_data": false, 00:15:33.143 "copy": true, 00:15:33.143 "nvme_iov_md": false 00:15:33.143 }, 00:15:33.143 "memory_domains": [ 00:15:33.143 { 00:15:33.143 "dma_device_id": "system", 00:15:33.143 "dma_device_type": 1 00:15:33.143 }, 00:15:33.143 { 00:15:33.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.143 "dma_device_type": 2 00:15:33.143 } 00:15:33.143 
], 00:15:33.143 "driver_specific": {} 00:15:33.143 } 00:15:33.143 ] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.143 07:12:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.143 "name": "Existed_Raid", 00:15:33.143 "uuid": "96afb197-ab0a-40c6-9c51-8e61a386724a", 00:15:33.143 "strip_size_kb": 0, 00:15:33.143 "state": "configuring", 00:15:33.143 "raid_level": "raid1", 00:15:33.143 "superblock": true, 00:15:33.143 "num_base_bdevs": 4, 00:15:33.143 "num_base_bdevs_discovered": 1, 00:15:33.143 "num_base_bdevs_operational": 4, 00:15:33.143 "base_bdevs_list": [ 00:15:33.143 { 00:15:33.143 "name": "BaseBdev1", 00:15:33.143 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:33.143 "is_configured": true, 00:15:33.143 "data_offset": 2048, 00:15:33.143 "data_size": 63488 00:15:33.143 }, 00:15:33.143 { 00:15:33.143 "name": "BaseBdev2", 00:15:33.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.143 "is_configured": false, 00:15:33.143 "data_offset": 0, 00:15:33.143 "data_size": 0 00:15:33.143 }, 00:15:33.143 { 00:15:33.143 "name": "BaseBdev3", 00:15:33.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.143 "is_configured": false, 00:15:33.143 "data_offset": 0, 00:15:33.143 "data_size": 0 00:15:33.143 }, 00:15:33.143 { 00:15:33.143 "name": "BaseBdev4", 00:15:33.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.143 "is_configured": false, 00:15:33.143 "data_offset": 0, 00:15:33.143 "data_size": 0 00:15:33.143 } 00:15:33.143 ] 00:15:33.143 }' 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.143 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.710 07:12:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.710 [2024-11-20 07:12:30.847342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.710 [2024-11-20 07:12:30.847401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.710 [2024-11-20 07:12:30.855381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.710 [2024-11-20 07:12:30.857993] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.710 [2024-11-20 07:12:30.858176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.710 [2024-11-20 07:12:30.858299] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.710 [2024-11-20 07:12:30.858361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.710 [2024-11-20 07:12:30.858464] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:33.710 [2024-11-20 07:12:30.858623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:33.710 "name": "Existed_Raid", 00:15:33.710 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:33.710 "strip_size_kb": 0, 00:15:33.710 "state": "configuring", 00:15:33.710 "raid_level": "raid1", 00:15:33.710 "superblock": true, 00:15:33.710 "num_base_bdevs": 4, 00:15:33.710 "num_base_bdevs_discovered": 1, 00:15:33.710 "num_base_bdevs_operational": 4, 00:15:33.710 "base_bdevs_list": [ 00:15:33.710 { 00:15:33.710 "name": "BaseBdev1", 00:15:33.710 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:33.710 "is_configured": true, 00:15:33.710 "data_offset": 2048, 00:15:33.710 "data_size": 63488 00:15:33.710 }, 00:15:33.710 { 00:15:33.710 "name": "BaseBdev2", 00:15:33.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.710 "is_configured": false, 00:15:33.710 "data_offset": 0, 00:15:33.710 "data_size": 0 00:15:33.710 }, 00:15:33.710 { 00:15:33.710 "name": "BaseBdev3", 00:15:33.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.710 "is_configured": false, 00:15:33.710 "data_offset": 0, 00:15:33.710 "data_size": 0 00:15:33.710 }, 00:15:33.710 { 00:15:33.710 "name": "BaseBdev4", 00:15:33.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.710 "is_configured": false, 00:15:33.710 "data_offset": 0, 00:15:33.710 "data_size": 0 00:15:33.710 } 00:15:33.710 ] 00:15:33.710 }' 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.710 07:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.277 [2024-11-20 07:12:31.389888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:34.277 BaseBdev2 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.277 [ 00:15:34.277 { 00:15:34.277 "name": "BaseBdev2", 00:15:34.277 "aliases": [ 00:15:34.277 "dac039de-492e-4d12-8cbc-44dce57a4154" 00:15:34.277 ], 00:15:34.277 "product_name": "Malloc disk", 00:15:34.277 "block_size": 512, 00:15:34.277 "num_blocks": 65536, 00:15:34.277 "uuid": "dac039de-492e-4d12-8cbc-44dce57a4154", 00:15:34.277 
"assigned_rate_limits": { 00:15:34.277 "rw_ios_per_sec": 0, 00:15:34.277 "rw_mbytes_per_sec": 0, 00:15:34.277 "r_mbytes_per_sec": 0, 00:15:34.277 "w_mbytes_per_sec": 0 00:15:34.277 }, 00:15:34.277 "claimed": true, 00:15:34.277 "claim_type": "exclusive_write", 00:15:34.277 "zoned": false, 00:15:34.277 "supported_io_types": { 00:15:34.277 "read": true, 00:15:34.277 "write": true, 00:15:34.277 "unmap": true, 00:15:34.277 "flush": true, 00:15:34.277 "reset": true, 00:15:34.277 "nvme_admin": false, 00:15:34.277 "nvme_io": false, 00:15:34.277 "nvme_io_md": false, 00:15:34.277 "write_zeroes": true, 00:15:34.277 "zcopy": true, 00:15:34.277 "get_zone_info": false, 00:15:34.277 "zone_management": false, 00:15:34.277 "zone_append": false, 00:15:34.277 "compare": false, 00:15:34.277 "compare_and_write": false, 00:15:34.277 "abort": true, 00:15:34.277 "seek_hole": false, 00:15:34.277 "seek_data": false, 00:15:34.277 "copy": true, 00:15:34.277 "nvme_iov_md": false 00:15:34.277 }, 00:15:34.277 "memory_domains": [ 00:15:34.277 { 00:15:34.277 "dma_device_id": "system", 00:15:34.277 "dma_device_type": 1 00:15:34.277 }, 00:15:34.277 { 00:15:34.277 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.277 "dma_device_type": 2 00:15:34.277 } 00:15:34.277 ], 00:15:34.277 "driver_specific": {} 00:15:34.277 } 00:15:34.277 ] 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.277 "name": "Existed_Raid", 00:15:34.277 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:34.277 "strip_size_kb": 0, 00:15:34.277 "state": "configuring", 00:15:34.277 "raid_level": "raid1", 00:15:34.277 "superblock": true, 00:15:34.277 "num_base_bdevs": 4, 00:15:34.277 "num_base_bdevs_discovered": 2, 00:15:34.277 "num_base_bdevs_operational": 4, 
00:15:34.277 "base_bdevs_list": [ 00:15:34.277 { 00:15:34.277 "name": "BaseBdev1", 00:15:34.277 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:34.277 "is_configured": true, 00:15:34.277 "data_offset": 2048, 00:15:34.277 "data_size": 63488 00:15:34.277 }, 00:15:34.277 { 00:15:34.277 "name": "BaseBdev2", 00:15:34.277 "uuid": "dac039de-492e-4d12-8cbc-44dce57a4154", 00:15:34.277 "is_configured": true, 00:15:34.277 "data_offset": 2048, 00:15:34.277 "data_size": 63488 00:15:34.277 }, 00:15:34.277 { 00:15:34.277 "name": "BaseBdev3", 00:15:34.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.277 "is_configured": false, 00:15:34.277 "data_offset": 0, 00:15:34.277 "data_size": 0 00:15:34.277 }, 00:15:34.277 { 00:15:34.277 "name": "BaseBdev4", 00:15:34.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.277 "is_configured": false, 00:15:34.277 "data_offset": 0, 00:15:34.277 "data_size": 0 00:15:34.277 } 00:15:34.277 ] 00:15:34.277 }' 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.277 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.844 [2024-11-20 07:12:31.954511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:34.844 BaseBdev3 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.844 [ 00:15:34.844 { 00:15:34.844 "name": "BaseBdev3", 00:15:34.844 "aliases": [ 00:15:34.844 "cb2061ec-1bf0-4244-b25d-2dc2859f70e1" 00:15:34.844 ], 00:15:34.844 "product_name": "Malloc disk", 00:15:34.844 "block_size": 512, 00:15:34.844 "num_blocks": 65536, 00:15:34.844 "uuid": "cb2061ec-1bf0-4244-b25d-2dc2859f70e1", 00:15:34.844 "assigned_rate_limits": { 00:15:34.844 "rw_ios_per_sec": 0, 00:15:34.844 "rw_mbytes_per_sec": 0, 00:15:34.844 "r_mbytes_per_sec": 0, 00:15:34.844 "w_mbytes_per_sec": 0 00:15:34.844 }, 00:15:34.844 "claimed": true, 00:15:34.844 "claim_type": "exclusive_write", 00:15:34.844 "zoned": false, 00:15:34.844 "supported_io_types": { 00:15:34.844 "read": true, 00:15:34.844 
"write": true, 00:15:34.844 "unmap": true, 00:15:34.844 "flush": true, 00:15:34.844 "reset": true, 00:15:34.844 "nvme_admin": false, 00:15:34.844 "nvme_io": false, 00:15:34.844 "nvme_io_md": false, 00:15:34.844 "write_zeroes": true, 00:15:34.844 "zcopy": true, 00:15:34.844 "get_zone_info": false, 00:15:34.844 "zone_management": false, 00:15:34.844 "zone_append": false, 00:15:34.844 "compare": false, 00:15:34.844 "compare_and_write": false, 00:15:34.844 "abort": true, 00:15:34.844 "seek_hole": false, 00:15:34.844 "seek_data": false, 00:15:34.844 "copy": true, 00:15:34.844 "nvme_iov_md": false 00:15:34.844 }, 00:15:34.844 "memory_domains": [ 00:15:34.844 { 00:15:34.844 "dma_device_id": "system", 00:15:34.844 "dma_device_type": 1 00:15:34.844 }, 00:15:34.844 { 00:15:34.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.844 "dma_device_type": 2 00:15:34.844 } 00:15:34.844 ], 00:15:34.844 "driver_specific": {} 00:15:34.844 } 00:15:34.844 ] 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.844 07:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.844 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.844 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.844 "name": "Existed_Raid", 00:15:34.844 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:34.844 "strip_size_kb": 0, 00:15:34.844 "state": "configuring", 00:15:34.844 "raid_level": "raid1", 00:15:34.844 "superblock": true, 00:15:34.844 "num_base_bdevs": 4, 00:15:34.844 "num_base_bdevs_discovered": 3, 00:15:34.844 "num_base_bdevs_operational": 4, 00:15:34.844 "base_bdevs_list": [ 00:15:34.844 { 00:15:34.844 "name": "BaseBdev1", 00:15:34.844 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:34.844 "is_configured": true, 00:15:34.844 "data_offset": 2048, 00:15:34.844 "data_size": 63488 00:15:34.844 }, 00:15:34.844 { 00:15:34.844 "name": "BaseBdev2", 00:15:34.844 "uuid": 
"dac039de-492e-4d12-8cbc-44dce57a4154", 00:15:34.844 "is_configured": true, 00:15:34.844 "data_offset": 2048, 00:15:34.844 "data_size": 63488 00:15:34.844 }, 00:15:34.844 { 00:15:34.844 "name": "BaseBdev3", 00:15:34.844 "uuid": "cb2061ec-1bf0-4244-b25d-2dc2859f70e1", 00:15:34.844 "is_configured": true, 00:15:34.845 "data_offset": 2048, 00:15:34.845 "data_size": 63488 00:15:34.845 }, 00:15:34.845 { 00:15:34.845 "name": "BaseBdev4", 00:15:34.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.845 "is_configured": false, 00:15:34.845 "data_offset": 0, 00:15:34.845 "data_size": 0 00:15:34.845 } 00:15:34.845 ] 00:15:34.845 }' 00:15:34.845 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.845 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.413 [2024-11-20 07:12:32.505676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:35.413 BaseBdev4 00:15:35.413 [2024-11-20 07:12:32.506190] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:35.413 [2024-11-20 07:12:32.506218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:35.413 [2024-11-20 07:12:32.506556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:35.413 [2024-11-20 07:12:32.506765] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:35.413 [2024-11-20 07:12:32.506786] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:15:35.413 [2024-11-20 07:12:32.506994] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.413 [ 00:15:35.413 { 00:15:35.413 "name": "BaseBdev4", 00:15:35.413 "aliases": [ 00:15:35.413 "24e95d0d-7454-46e3-b4af-e7831ac03934" 00:15:35.413 ], 00:15:35.413 "product_name": "Malloc disk", 00:15:35.413 "block_size": 512, 00:15:35.413 
"num_blocks": 65536, 00:15:35.413 "uuid": "24e95d0d-7454-46e3-b4af-e7831ac03934", 00:15:35.413 "assigned_rate_limits": { 00:15:35.413 "rw_ios_per_sec": 0, 00:15:35.413 "rw_mbytes_per_sec": 0, 00:15:35.413 "r_mbytes_per_sec": 0, 00:15:35.413 "w_mbytes_per_sec": 0 00:15:35.413 }, 00:15:35.413 "claimed": true, 00:15:35.413 "claim_type": "exclusive_write", 00:15:35.413 "zoned": false, 00:15:35.413 "supported_io_types": { 00:15:35.413 "read": true, 00:15:35.413 "write": true, 00:15:35.413 "unmap": true, 00:15:35.413 "flush": true, 00:15:35.413 "reset": true, 00:15:35.413 "nvme_admin": false, 00:15:35.413 "nvme_io": false, 00:15:35.413 "nvme_io_md": false, 00:15:35.413 "write_zeroes": true, 00:15:35.413 "zcopy": true, 00:15:35.413 "get_zone_info": false, 00:15:35.413 "zone_management": false, 00:15:35.413 "zone_append": false, 00:15:35.413 "compare": false, 00:15:35.413 "compare_and_write": false, 00:15:35.413 "abort": true, 00:15:35.413 "seek_hole": false, 00:15:35.413 "seek_data": false, 00:15:35.413 "copy": true, 00:15:35.413 "nvme_iov_md": false 00:15:35.413 }, 00:15:35.413 "memory_domains": [ 00:15:35.413 { 00:15:35.413 "dma_device_id": "system", 00:15:35.413 "dma_device_type": 1 00:15:35.413 }, 00:15:35.413 { 00:15:35.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.413 "dma_device_type": 2 00:15:35.413 } 00:15:35.413 ], 00:15:35.413 "driver_specific": {} 00:15:35.413 } 00:15:35.413 ] 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.413 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.413 "name": "Existed_Raid", 00:15:35.413 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:35.413 "strip_size_kb": 0, 00:15:35.413 "state": "online", 00:15:35.413 "raid_level": "raid1", 00:15:35.413 "superblock": true, 00:15:35.413 "num_base_bdevs": 4, 
00:15:35.414 "num_base_bdevs_discovered": 4, 00:15:35.414 "num_base_bdevs_operational": 4, 00:15:35.414 "base_bdevs_list": [ 00:15:35.414 { 00:15:35.414 "name": "BaseBdev1", 00:15:35.414 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:35.414 "is_configured": true, 00:15:35.414 "data_offset": 2048, 00:15:35.414 "data_size": 63488 00:15:35.414 }, 00:15:35.414 { 00:15:35.414 "name": "BaseBdev2", 00:15:35.414 "uuid": "dac039de-492e-4d12-8cbc-44dce57a4154", 00:15:35.414 "is_configured": true, 00:15:35.414 "data_offset": 2048, 00:15:35.414 "data_size": 63488 00:15:35.414 }, 00:15:35.414 { 00:15:35.414 "name": "BaseBdev3", 00:15:35.414 "uuid": "cb2061ec-1bf0-4244-b25d-2dc2859f70e1", 00:15:35.414 "is_configured": true, 00:15:35.414 "data_offset": 2048, 00:15:35.414 "data_size": 63488 00:15:35.414 }, 00:15:35.414 { 00:15:35.414 "name": "BaseBdev4", 00:15:35.414 "uuid": "24e95d0d-7454-46e3-b4af-e7831ac03934", 00:15:35.414 "is_configured": true, 00:15:35.414 "data_offset": 2048, 00:15:35.414 "data_size": 63488 00:15:35.414 } 00:15:35.414 ] 00:15:35.414 }' 00:15:35.414 07:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.414 07:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:35.981 
07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.981 [2024-11-20 07:12:33.054356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.981 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:35.981 "name": "Existed_Raid", 00:15:35.981 "aliases": [ 00:15:35.981 "451085d6-8d54-476e-83d9-34987c9af230" 00:15:35.981 ], 00:15:35.981 "product_name": "Raid Volume", 00:15:35.981 "block_size": 512, 00:15:35.981 "num_blocks": 63488, 00:15:35.981 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:35.981 "assigned_rate_limits": { 00:15:35.981 "rw_ios_per_sec": 0, 00:15:35.981 "rw_mbytes_per_sec": 0, 00:15:35.981 "r_mbytes_per_sec": 0, 00:15:35.981 "w_mbytes_per_sec": 0 00:15:35.981 }, 00:15:35.981 "claimed": false, 00:15:35.981 "zoned": false, 00:15:35.981 "supported_io_types": { 00:15:35.981 "read": true, 00:15:35.981 "write": true, 00:15:35.981 "unmap": false, 00:15:35.981 "flush": false, 00:15:35.981 "reset": true, 00:15:35.981 "nvme_admin": false, 00:15:35.981 "nvme_io": false, 00:15:35.981 "nvme_io_md": false, 00:15:35.981 "write_zeroes": true, 00:15:35.981 "zcopy": false, 00:15:35.981 "get_zone_info": false, 00:15:35.981 "zone_management": false, 00:15:35.981 "zone_append": false, 00:15:35.981 "compare": false, 00:15:35.981 "compare_and_write": false, 00:15:35.981 "abort": false, 00:15:35.981 "seek_hole": false, 00:15:35.981 "seek_data": false, 00:15:35.981 "copy": false, 00:15:35.981 
"nvme_iov_md": false 00:15:35.981 }, 00:15:35.981 "memory_domains": [ 00:15:35.981 { 00:15:35.981 "dma_device_id": "system", 00:15:35.981 "dma_device_type": 1 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.981 "dma_device_type": 2 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "system", 00:15:35.981 "dma_device_type": 1 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.981 "dma_device_type": 2 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "system", 00:15:35.981 "dma_device_type": 1 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.981 "dma_device_type": 2 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "system", 00:15:35.981 "dma_device_type": 1 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.981 "dma_device_type": 2 00:15:35.981 } 00:15:35.981 ], 00:15:35.981 "driver_specific": { 00:15:35.981 "raid": { 00:15:35.981 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:35.981 "strip_size_kb": 0, 00:15:35.981 "state": "online", 00:15:35.981 "raid_level": "raid1", 00:15:35.981 "superblock": true, 00:15:35.981 "num_base_bdevs": 4, 00:15:35.981 "num_base_bdevs_discovered": 4, 00:15:35.981 "num_base_bdevs_operational": 4, 00:15:35.981 "base_bdevs_list": [ 00:15:35.981 { 00:15:35.981 "name": "BaseBdev1", 00:15:35.981 "uuid": "a7d96eb8-968a-4bd7-8bdc-20e1d1c0a56a", 00:15:35.981 "is_configured": true, 00:15:35.981 "data_offset": 2048, 00:15:35.981 "data_size": 63488 00:15:35.981 }, 00:15:35.981 { 00:15:35.981 "name": "BaseBdev2", 00:15:35.981 "uuid": "dac039de-492e-4d12-8cbc-44dce57a4154", 00:15:35.981 "is_configured": true, 00:15:35.982 "data_offset": 2048, 00:15:35.982 "data_size": 63488 00:15:35.982 }, 00:15:35.982 { 00:15:35.982 "name": "BaseBdev3", 00:15:35.982 "uuid": "cb2061ec-1bf0-4244-b25d-2dc2859f70e1", 00:15:35.982 "is_configured": true, 
00:15:35.982 "data_offset": 2048, 00:15:35.982 "data_size": 63488 00:15:35.982 }, 00:15:35.982 { 00:15:35.982 "name": "BaseBdev4", 00:15:35.982 "uuid": "24e95d0d-7454-46e3-b4af-e7831ac03934", 00:15:35.982 "is_configured": true, 00:15:35.982 "data_offset": 2048, 00:15:35.982 "data_size": 63488 00:15:35.982 } 00:15:35.982 ] 00:15:35.982 } 00:15:35.982 } 00:15:35.982 }' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:35.982 BaseBdev2 00:15:35.982 BaseBdev3 00:15:35.982 BaseBdev4' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:35.982 07:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.982 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.241 [2024-11-20 07:12:33.434164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.241 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.242 07:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.242 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.501 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.501 "name": "Existed_Raid", 00:15:36.501 "uuid": "451085d6-8d54-476e-83d9-34987c9af230", 00:15:36.501 "strip_size_kb": 0, 00:15:36.501 
"state": "online", 00:15:36.501 "raid_level": "raid1", 00:15:36.501 "superblock": true, 00:15:36.501 "num_base_bdevs": 4, 00:15:36.501 "num_base_bdevs_discovered": 3, 00:15:36.501 "num_base_bdevs_operational": 3, 00:15:36.501 "base_bdevs_list": [ 00:15:36.501 { 00:15:36.501 "name": null, 00:15:36.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.501 "is_configured": false, 00:15:36.501 "data_offset": 0, 00:15:36.501 "data_size": 63488 00:15:36.501 }, 00:15:36.501 { 00:15:36.501 "name": "BaseBdev2", 00:15:36.501 "uuid": "dac039de-492e-4d12-8cbc-44dce57a4154", 00:15:36.501 "is_configured": true, 00:15:36.501 "data_offset": 2048, 00:15:36.501 "data_size": 63488 00:15:36.501 }, 00:15:36.501 { 00:15:36.501 "name": "BaseBdev3", 00:15:36.501 "uuid": "cb2061ec-1bf0-4244-b25d-2dc2859f70e1", 00:15:36.501 "is_configured": true, 00:15:36.501 "data_offset": 2048, 00:15:36.501 "data_size": 63488 00:15:36.501 }, 00:15:36.501 { 00:15:36.501 "name": "BaseBdev4", 00:15:36.501 "uuid": "24e95d0d-7454-46e3-b4af-e7831ac03934", 00:15:36.501 "is_configured": true, 00:15:36.501 "data_offset": 2048, 00:15:36.501 "data_size": 63488 00:15:36.501 } 00:15:36.501 ] 00:15:36.501 }' 00:15:36.501 07:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.501 07:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.780 07:12:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.780 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.780 [2024-11-20 07:12:34.084586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.039 [2024-11-20 07:12:34.233518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.039 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.297 [2024-11-20 07:12:34.383510] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:37.297 [2024-11-20 07:12:34.383795] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.297 [2024-11-20 07:12:34.467857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.297 [2024-11-20 07:12:34.468171] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.297 [2024-11-20 07:12:34.468362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.297 BaseBdev2 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:37.297 [ 00:15:37.297 { 00:15:37.297 "name": "BaseBdev2", 00:15:37.297 "aliases": [ 00:15:37.297 "ddd7a411-68f7-425f-94af-61685a80a45a" 00:15:37.297 ], 00:15:37.297 "product_name": "Malloc disk", 00:15:37.297 "block_size": 512, 00:15:37.297 "num_blocks": 65536, 00:15:37.297 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:37.297 "assigned_rate_limits": { 00:15:37.297 "rw_ios_per_sec": 0, 00:15:37.297 "rw_mbytes_per_sec": 0, 00:15:37.297 "r_mbytes_per_sec": 0, 00:15:37.297 "w_mbytes_per_sec": 0 00:15:37.297 }, 00:15:37.297 "claimed": false, 00:15:37.297 "zoned": false, 00:15:37.297 "supported_io_types": { 00:15:37.297 "read": true, 00:15:37.297 "write": true, 00:15:37.297 "unmap": true, 00:15:37.297 "flush": true, 00:15:37.297 "reset": true, 00:15:37.297 "nvme_admin": false, 00:15:37.297 "nvme_io": false, 00:15:37.297 "nvme_io_md": false, 00:15:37.297 "write_zeroes": true, 00:15:37.297 "zcopy": true, 00:15:37.297 "get_zone_info": false, 00:15:37.297 "zone_management": false, 00:15:37.297 "zone_append": false, 00:15:37.297 "compare": false, 00:15:37.297 "compare_and_write": false, 00:15:37.297 "abort": true, 00:15:37.297 "seek_hole": false, 00:15:37.297 "seek_data": false, 00:15:37.297 "copy": true, 00:15:37.297 "nvme_iov_md": false 00:15:37.297 }, 00:15:37.297 "memory_domains": [ 00:15:37.297 { 00:15:37.297 "dma_device_id": "system", 00:15:37.297 "dma_device_type": 1 00:15:37.297 }, 00:15:37.297 { 00:15:37.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.297 "dma_device_type": 2 00:15:37.297 } 00:15:37.297 ], 00:15:37.297 "driver_specific": {} 00:15:37.297 } 00:15:37.297 ] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.297 07:12:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.297 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.556 BaseBdev3 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.556 07:12:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.556 [ 00:15:37.556 { 00:15:37.556 "name": "BaseBdev3", 00:15:37.556 "aliases": [ 00:15:37.556 "f9c814de-2347-4359-a891-d2a4980d3e9c" 00:15:37.556 ], 00:15:37.556 "product_name": "Malloc disk", 00:15:37.556 "block_size": 512, 00:15:37.556 "num_blocks": 65536, 00:15:37.556 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:37.556 "assigned_rate_limits": { 00:15:37.556 "rw_ios_per_sec": 0, 00:15:37.556 "rw_mbytes_per_sec": 0, 00:15:37.556 "r_mbytes_per_sec": 0, 00:15:37.556 "w_mbytes_per_sec": 0 00:15:37.556 }, 00:15:37.556 "claimed": false, 00:15:37.556 "zoned": false, 00:15:37.556 "supported_io_types": { 00:15:37.556 "read": true, 00:15:37.556 "write": true, 00:15:37.556 "unmap": true, 00:15:37.556 "flush": true, 00:15:37.556 "reset": true, 00:15:37.556 "nvme_admin": false, 00:15:37.556 "nvme_io": false, 00:15:37.556 "nvme_io_md": false, 00:15:37.556 "write_zeroes": true, 00:15:37.556 "zcopy": true, 00:15:37.556 "get_zone_info": false, 00:15:37.556 "zone_management": false, 00:15:37.556 "zone_append": false, 00:15:37.556 "compare": false, 00:15:37.556 "compare_and_write": false, 00:15:37.556 "abort": true, 00:15:37.556 "seek_hole": false, 00:15:37.556 "seek_data": false, 00:15:37.556 "copy": true, 00:15:37.556 "nvme_iov_md": false 00:15:37.556 }, 00:15:37.556 "memory_domains": [ 00:15:37.556 { 00:15:37.556 "dma_device_id": "system", 00:15:37.556 "dma_device_type": 1 00:15:37.556 }, 00:15:37.556 { 00:15:37.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.556 "dma_device_type": 2 00:15:37.556 } 00:15:37.556 ], 00:15:37.556 "driver_specific": {} 00:15:37.556 } 00:15:37.556 ] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.556 BaseBdev4 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.556 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.557 [ 00:15:37.557 { 00:15:37.557 "name": "BaseBdev4", 00:15:37.557 "aliases": [ 00:15:37.557 "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69" 00:15:37.557 ], 00:15:37.557 "product_name": "Malloc disk", 00:15:37.557 "block_size": 512, 00:15:37.557 "num_blocks": 65536, 00:15:37.557 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:37.557 "assigned_rate_limits": { 00:15:37.557 "rw_ios_per_sec": 0, 00:15:37.557 "rw_mbytes_per_sec": 0, 00:15:37.557 "r_mbytes_per_sec": 0, 00:15:37.557 "w_mbytes_per_sec": 0 00:15:37.557 }, 00:15:37.557 "claimed": false, 00:15:37.557 "zoned": false, 00:15:37.557 "supported_io_types": { 00:15:37.557 "read": true, 00:15:37.557 "write": true, 00:15:37.557 "unmap": true, 00:15:37.557 "flush": true, 00:15:37.557 "reset": true, 00:15:37.557 "nvme_admin": false, 00:15:37.557 "nvme_io": false, 00:15:37.557 "nvme_io_md": false, 00:15:37.557 "write_zeroes": true, 00:15:37.557 "zcopy": true, 00:15:37.557 "get_zone_info": false, 00:15:37.557 "zone_management": false, 00:15:37.557 "zone_append": false, 00:15:37.557 "compare": false, 00:15:37.557 "compare_and_write": false, 00:15:37.557 "abort": true, 00:15:37.557 "seek_hole": false, 00:15:37.557 "seek_data": false, 00:15:37.557 "copy": true, 00:15:37.557 "nvme_iov_md": false 00:15:37.557 }, 00:15:37.557 "memory_domains": [ 00:15:37.557 { 00:15:37.557 "dma_device_id": "system", 00:15:37.557 "dma_device_type": 1 00:15:37.557 }, 00:15:37.557 { 00:15:37.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.557 "dma_device_type": 2 00:15:37.557 } 00:15:37.557 ], 00:15:37.557 "driver_specific": {} 00:15:37.557 } 00:15:37.557 ] 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.557 [2024-11-20 07:12:34.749380] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.557 [2024-11-20 07:12:34.749567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.557 [2024-11-20 07:12:34.749692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.557 [2024-11-20 07:12:34.752068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.557 [2024-11-20 07:12:34.752133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.557 "name": "Existed_Raid", 00:15:37.557 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:37.557 "strip_size_kb": 0, 00:15:37.557 "state": "configuring", 00:15:37.557 "raid_level": "raid1", 00:15:37.557 "superblock": true, 00:15:37.557 "num_base_bdevs": 4, 00:15:37.557 "num_base_bdevs_discovered": 3, 00:15:37.557 "num_base_bdevs_operational": 4, 00:15:37.557 "base_bdevs_list": [ 00:15:37.557 { 00:15:37.557 "name": "BaseBdev1", 00:15:37.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.557 "is_configured": false, 00:15:37.557 "data_offset": 0, 00:15:37.557 "data_size": 0 00:15:37.557 }, 00:15:37.557 { 00:15:37.557 "name": "BaseBdev2", 00:15:37.557 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 
00:15:37.557 "is_configured": true, 00:15:37.557 "data_offset": 2048, 00:15:37.557 "data_size": 63488 00:15:37.557 }, 00:15:37.557 { 00:15:37.557 "name": "BaseBdev3", 00:15:37.557 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:37.557 "is_configured": true, 00:15:37.557 "data_offset": 2048, 00:15:37.557 "data_size": 63488 00:15:37.557 }, 00:15:37.557 { 00:15:37.557 "name": "BaseBdev4", 00:15:37.557 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:37.557 "is_configured": true, 00:15:37.557 "data_offset": 2048, 00:15:37.557 "data_size": 63488 00:15:37.557 } 00:15:37.557 ] 00:15:37.557 }' 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.557 07:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.123 [2024-11-20 07:12:35.281589] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.123 "name": "Existed_Raid", 00:15:38.123 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:38.123 "strip_size_kb": 0, 00:15:38.123 "state": "configuring", 00:15:38.123 "raid_level": "raid1", 00:15:38.123 "superblock": true, 00:15:38.123 "num_base_bdevs": 4, 00:15:38.123 "num_base_bdevs_discovered": 2, 00:15:38.123 "num_base_bdevs_operational": 4, 00:15:38.123 "base_bdevs_list": [ 00:15:38.123 { 00:15:38.123 "name": "BaseBdev1", 00:15:38.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.123 "is_configured": false, 00:15:38.123 "data_offset": 0, 00:15:38.123 "data_size": 0 00:15:38.123 }, 00:15:38.123 { 00:15:38.123 "name": null, 00:15:38.123 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:38.123 
"is_configured": false, 00:15:38.123 "data_offset": 0, 00:15:38.123 "data_size": 63488 00:15:38.123 }, 00:15:38.123 { 00:15:38.123 "name": "BaseBdev3", 00:15:38.123 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:38.123 "is_configured": true, 00:15:38.123 "data_offset": 2048, 00:15:38.123 "data_size": 63488 00:15:38.123 }, 00:15:38.123 { 00:15:38.123 "name": "BaseBdev4", 00:15:38.123 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:38.123 "is_configured": true, 00:15:38.123 "data_offset": 2048, 00:15:38.123 "data_size": 63488 00:15:38.123 } 00:15:38.123 ] 00:15:38.123 }' 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.123 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:38.689 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.690 [2024-11-20 07:12:35.895489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.690 BaseBdev1 
00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.690 [ 00:15:38.690 { 00:15:38.690 "name": "BaseBdev1", 00:15:38.690 "aliases": [ 00:15:38.690 "f93de783-b7e8-4aee-bf19-694b65a56876" 00:15:38.690 ], 00:15:38.690 "product_name": "Malloc disk", 00:15:38.690 "block_size": 512, 00:15:38.690 "num_blocks": 65536, 00:15:38.690 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:38.690 "assigned_rate_limits": { 00:15:38.690 
"rw_ios_per_sec": 0, 00:15:38.690 "rw_mbytes_per_sec": 0, 00:15:38.690 "r_mbytes_per_sec": 0, 00:15:38.690 "w_mbytes_per_sec": 0 00:15:38.690 }, 00:15:38.690 "claimed": true, 00:15:38.690 "claim_type": "exclusive_write", 00:15:38.690 "zoned": false, 00:15:38.690 "supported_io_types": { 00:15:38.690 "read": true, 00:15:38.690 "write": true, 00:15:38.690 "unmap": true, 00:15:38.690 "flush": true, 00:15:38.690 "reset": true, 00:15:38.690 "nvme_admin": false, 00:15:38.690 "nvme_io": false, 00:15:38.690 "nvme_io_md": false, 00:15:38.690 "write_zeroes": true, 00:15:38.690 "zcopy": true, 00:15:38.690 "get_zone_info": false, 00:15:38.690 "zone_management": false, 00:15:38.690 "zone_append": false, 00:15:38.690 "compare": false, 00:15:38.690 "compare_and_write": false, 00:15:38.690 "abort": true, 00:15:38.690 "seek_hole": false, 00:15:38.690 "seek_data": false, 00:15:38.690 "copy": true, 00:15:38.690 "nvme_iov_md": false 00:15:38.690 }, 00:15:38.690 "memory_domains": [ 00:15:38.690 { 00:15:38.690 "dma_device_id": "system", 00:15:38.690 "dma_device_type": 1 00:15:38.690 }, 00:15:38.690 { 00:15:38.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.690 "dma_device_type": 2 00:15:38.690 } 00:15:38.690 ], 00:15:38.690 "driver_specific": {} 00:15:38.690 } 00:15:38.690 ] 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.690 "name": "Existed_Raid", 00:15:38.690 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:38.690 "strip_size_kb": 0, 00:15:38.690 "state": "configuring", 00:15:38.690 "raid_level": "raid1", 00:15:38.690 "superblock": true, 00:15:38.690 "num_base_bdevs": 4, 00:15:38.690 "num_base_bdevs_discovered": 3, 00:15:38.690 "num_base_bdevs_operational": 4, 00:15:38.690 "base_bdevs_list": [ 00:15:38.690 { 00:15:38.690 "name": "BaseBdev1", 00:15:38.690 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:38.690 "is_configured": true, 00:15:38.690 "data_offset": 2048, 00:15:38.690 "data_size": 63488 
00:15:38.690 }, 00:15:38.690 { 00:15:38.690 "name": null, 00:15:38.690 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:38.690 "is_configured": false, 00:15:38.690 "data_offset": 0, 00:15:38.690 "data_size": 63488 00:15:38.690 }, 00:15:38.690 { 00:15:38.690 "name": "BaseBdev3", 00:15:38.690 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:38.690 "is_configured": true, 00:15:38.690 "data_offset": 2048, 00:15:38.690 "data_size": 63488 00:15:38.690 }, 00:15:38.690 { 00:15:38.690 "name": "BaseBdev4", 00:15:38.690 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:38.690 "is_configured": true, 00:15:38.690 "data_offset": 2048, 00:15:38.690 "data_size": 63488 00:15:38.690 } 00:15:38.690 ] 00:15:38.690 }' 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.690 07:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.257 
[2024-11-20 07:12:36.495756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.257 07:12:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.257 "name": "Existed_Raid", 00:15:39.257 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:39.257 "strip_size_kb": 0, 00:15:39.257 "state": "configuring", 00:15:39.257 "raid_level": "raid1", 00:15:39.257 "superblock": true, 00:15:39.257 "num_base_bdevs": 4, 00:15:39.257 "num_base_bdevs_discovered": 2, 00:15:39.257 "num_base_bdevs_operational": 4, 00:15:39.257 "base_bdevs_list": [ 00:15:39.257 { 00:15:39.257 "name": "BaseBdev1", 00:15:39.257 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:39.257 "is_configured": true, 00:15:39.257 "data_offset": 2048, 00:15:39.257 "data_size": 63488 00:15:39.257 }, 00:15:39.257 { 00:15:39.257 "name": null, 00:15:39.257 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:39.257 "is_configured": false, 00:15:39.257 "data_offset": 0, 00:15:39.257 "data_size": 63488 00:15:39.257 }, 00:15:39.257 { 00:15:39.257 "name": null, 00:15:39.257 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:39.257 "is_configured": false, 00:15:39.257 "data_offset": 0, 00:15:39.257 "data_size": 63488 00:15:39.257 }, 00:15:39.257 { 00:15:39.257 "name": "BaseBdev4", 00:15:39.257 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:39.257 "is_configured": true, 00:15:39.257 "data_offset": 2048, 00:15:39.257 "data_size": 63488 00:15:39.257 } 00:15:39.257 ] 00:15:39.257 }' 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.257 07:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.825 07:12:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.825 [2024-11-20 07:12:37.071878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.825 "name": "Existed_Raid", 00:15:39.825 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:39.825 "strip_size_kb": 0, 00:15:39.825 "state": "configuring", 00:15:39.825 "raid_level": "raid1", 00:15:39.825 "superblock": true, 00:15:39.825 "num_base_bdevs": 4, 00:15:39.825 "num_base_bdevs_discovered": 3, 00:15:39.825 "num_base_bdevs_operational": 4, 00:15:39.825 "base_bdevs_list": [ 00:15:39.825 { 00:15:39.825 "name": "BaseBdev1", 00:15:39.825 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:39.825 "is_configured": true, 00:15:39.825 "data_offset": 2048, 00:15:39.825 "data_size": 63488 00:15:39.825 }, 00:15:39.825 { 00:15:39.825 "name": null, 00:15:39.825 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:39.825 "is_configured": false, 00:15:39.825 "data_offset": 0, 00:15:39.825 "data_size": 63488 00:15:39.825 }, 00:15:39.825 { 00:15:39.825 "name": "BaseBdev3", 00:15:39.825 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:39.825 "is_configured": true, 00:15:39.825 "data_offset": 2048, 00:15:39.825 "data_size": 63488 00:15:39.825 }, 00:15:39.825 { 00:15:39.825 "name": "BaseBdev4", 00:15:39.825 "uuid": 
"4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:39.825 "is_configured": true, 00:15:39.825 "data_offset": 2048, 00:15:39.825 "data_size": 63488 00:15:39.825 } 00:15:39.825 ] 00:15:39.825 }' 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.825 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.394 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.394 [2024-11-20 07:12:37.648121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.658 "name": "Existed_Raid", 00:15:40.658 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:40.658 "strip_size_kb": 0, 00:15:40.658 "state": "configuring", 00:15:40.658 "raid_level": "raid1", 00:15:40.658 "superblock": true, 00:15:40.658 "num_base_bdevs": 4, 00:15:40.658 "num_base_bdevs_discovered": 2, 00:15:40.658 "num_base_bdevs_operational": 4, 00:15:40.658 "base_bdevs_list": [ 00:15:40.658 { 00:15:40.658 "name": null, 00:15:40.658 
"uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:40.658 "is_configured": false, 00:15:40.658 "data_offset": 0, 00:15:40.658 "data_size": 63488 00:15:40.658 }, 00:15:40.658 { 00:15:40.658 "name": null, 00:15:40.658 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:40.658 "is_configured": false, 00:15:40.658 "data_offset": 0, 00:15:40.658 "data_size": 63488 00:15:40.658 }, 00:15:40.658 { 00:15:40.658 "name": "BaseBdev3", 00:15:40.658 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:40.658 "is_configured": true, 00:15:40.658 "data_offset": 2048, 00:15:40.658 "data_size": 63488 00:15:40.658 }, 00:15:40.658 { 00:15:40.658 "name": "BaseBdev4", 00:15:40.658 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:40.658 "is_configured": true, 00:15:40.658 "data_offset": 2048, 00:15:40.658 "data_size": 63488 00:15:40.658 } 00:15:40.658 ] 00:15:40.658 }' 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.658 07:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.224 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.225 [2024-11-20 07:12:38.294413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.225 "name": "Existed_Raid", 00:15:41.225 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:41.225 "strip_size_kb": 0, 00:15:41.225 "state": "configuring", 00:15:41.225 "raid_level": "raid1", 00:15:41.225 "superblock": true, 00:15:41.225 "num_base_bdevs": 4, 00:15:41.225 "num_base_bdevs_discovered": 3, 00:15:41.225 "num_base_bdevs_operational": 4, 00:15:41.225 "base_bdevs_list": [ 00:15:41.225 { 00:15:41.225 "name": null, 00:15:41.225 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:41.225 "is_configured": false, 00:15:41.225 "data_offset": 0, 00:15:41.225 "data_size": 63488 00:15:41.225 }, 00:15:41.225 { 00:15:41.225 "name": "BaseBdev2", 00:15:41.225 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:41.225 "is_configured": true, 00:15:41.225 "data_offset": 2048, 00:15:41.225 "data_size": 63488 00:15:41.225 }, 00:15:41.225 { 00:15:41.225 "name": "BaseBdev3", 00:15:41.225 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:41.225 "is_configured": true, 00:15:41.225 "data_offset": 2048, 00:15:41.225 "data_size": 63488 00:15:41.225 }, 00:15:41.225 { 00:15:41.225 "name": "BaseBdev4", 00:15:41.225 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:41.225 "is_configured": true, 00:15:41.225 "data_offset": 2048, 00:15:41.225 "data_size": 63488 00:15:41.225 } 00:15:41.225 ] 00:15:41.225 }' 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.225 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f93de783-b7e8-4aee-bf19-694b65a56876 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.792 [2024-11-20 07:12:38.952961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.792 NewBaseBdev 00:15:41.792 [2024-11-20 07:12:38.953479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:41.792 [2024-11-20 07:12:38.953520] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:41.792 [2024-11-20 07:12:38.953844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:15:41.792 [2024-11-20 07:12:38.954076] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:41.792 [2024-11-20 07:12:38.954093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:41.792 [2024-11-20 07:12:38.954253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.792 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.793 [ 00:15:41.793 { 00:15:41.793 "name": "NewBaseBdev", 00:15:41.793 "aliases": [ 00:15:41.793 "f93de783-b7e8-4aee-bf19-694b65a56876" 00:15:41.793 ], 00:15:41.793 "product_name": "Malloc disk", 00:15:41.793 "block_size": 512, 00:15:41.793 "num_blocks": 65536, 00:15:41.793 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:41.793 "assigned_rate_limits": { 00:15:41.793 "rw_ios_per_sec": 0, 00:15:41.793 "rw_mbytes_per_sec": 0, 00:15:41.793 "r_mbytes_per_sec": 0, 00:15:41.793 "w_mbytes_per_sec": 0 00:15:41.793 }, 00:15:41.793 "claimed": true, 00:15:41.793 "claim_type": "exclusive_write", 00:15:41.793 "zoned": false, 00:15:41.793 "supported_io_types": { 00:15:41.793 "read": true, 00:15:41.793 "write": true, 00:15:41.793 "unmap": true, 00:15:41.793 "flush": true, 00:15:41.793 "reset": true, 00:15:41.793 "nvme_admin": false, 00:15:41.793 "nvme_io": false, 00:15:41.793 "nvme_io_md": false, 00:15:41.793 "write_zeroes": true, 00:15:41.793 "zcopy": true, 00:15:41.793 "get_zone_info": false, 00:15:41.793 "zone_management": false, 00:15:41.793 "zone_append": false, 00:15:41.793 "compare": false, 00:15:41.793 "compare_and_write": false, 00:15:41.793 "abort": true, 00:15:41.793 "seek_hole": false, 00:15:41.793 "seek_data": false, 00:15:41.793 "copy": true, 00:15:41.793 "nvme_iov_md": false 00:15:41.793 }, 00:15:41.793 "memory_domains": [ 00:15:41.793 { 00:15:41.793 "dma_device_id": "system", 00:15:41.793 "dma_device_type": 1 00:15:41.793 }, 00:15:41.793 { 00:15:41.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.793 "dma_device_type": 2 00:15:41.793 } 00:15:41.793 ], 00:15:41.793 "driver_specific": {} 00:15:41.793 } 00:15:41.793 ] 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.793 07:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.793 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.793 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.793 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.793 "name": "Existed_Raid", 00:15:41.793 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:41.793 "strip_size_kb": 0, 00:15:41.793 "state": "online", 00:15:41.793 "raid_level": 
"raid1", 00:15:41.793 "superblock": true, 00:15:41.793 "num_base_bdevs": 4, 00:15:41.793 "num_base_bdevs_discovered": 4, 00:15:41.793 "num_base_bdevs_operational": 4, 00:15:41.793 "base_bdevs_list": [ 00:15:41.793 { 00:15:41.793 "name": "NewBaseBdev", 00:15:41.793 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:41.793 "is_configured": true, 00:15:41.793 "data_offset": 2048, 00:15:41.793 "data_size": 63488 00:15:41.793 }, 00:15:41.793 { 00:15:41.793 "name": "BaseBdev2", 00:15:41.793 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:41.793 "is_configured": true, 00:15:41.793 "data_offset": 2048, 00:15:41.793 "data_size": 63488 00:15:41.793 }, 00:15:41.793 { 00:15:41.793 "name": "BaseBdev3", 00:15:41.793 "uuid": "f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:41.793 "is_configured": true, 00:15:41.793 "data_offset": 2048, 00:15:41.793 "data_size": 63488 00:15:41.793 }, 00:15:41.793 { 00:15:41.793 "name": "BaseBdev4", 00:15:41.793 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:41.793 "is_configured": true, 00:15:41.793 "data_offset": 2048, 00:15:41.793 "data_size": 63488 00:15:41.793 } 00:15:41.793 ] 00:15:41.793 }' 00:15:41.793 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.793 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.362 [2024-11-20 07:12:39.517642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.362 "name": "Existed_Raid", 00:15:42.362 "aliases": [ 00:15:42.362 "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7" 00:15:42.362 ], 00:15:42.362 "product_name": "Raid Volume", 00:15:42.362 "block_size": 512, 00:15:42.362 "num_blocks": 63488, 00:15:42.362 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:42.362 "assigned_rate_limits": { 00:15:42.362 "rw_ios_per_sec": 0, 00:15:42.362 "rw_mbytes_per_sec": 0, 00:15:42.362 "r_mbytes_per_sec": 0, 00:15:42.362 "w_mbytes_per_sec": 0 00:15:42.362 }, 00:15:42.362 "claimed": false, 00:15:42.362 "zoned": false, 00:15:42.362 "supported_io_types": { 00:15:42.362 "read": true, 00:15:42.362 "write": true, 00:15:42.362 "unmap": false, 00:15:42.362 "flush": false, 00:15:42.362 "reset": true, 00:15:42.362 "nvme_admin": false, 00:15:42.362 "nvme_io": false, 00:15:42.362 "nvme_io_md": false, 00:15:42.362 "write_zeroes": true, 00:15:42.362 "zcopy": false, 00:15:42.362 "get_zone_info": false, 00:15:42.362 "zone_management": false, 00:15:42.362 "zone_append": false, 00:15:42.362 "compare": false, 00:15:42.362 "compare_and_write": false, 00:15:42.362 "abort": false, 00:15:42.362 "seek_hole": false, 
00:15:42.362 "seek_data": false, 00:15:42.362 "copy": false, 00:15:42.362 "nvme_iov_md": false 00:15:42.362 }, 00:15:42.362 "memory_domains": [ 00:15:42.362 { 00:15:42.362 "dma_device_id": "system", 00:15:42.362 "dma_device_type": 1 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.362 "dma_device_type": 2 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "system", 00:15:42.362 "dma_device_type": 1 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.362 "dma_device_type": 2 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "system", 00:15:42.362 "dma_device_type": 1 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.362 "dma_device_type": 2 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "system", 00:15:42.362 "dma_device_type": 1 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.362 "dma_device_type": 2 00:15:42.362 } 00:15:42.362 ], 00:15:42.362 "driver_specific": { 00:15:42.362 "raid": { 00:15:42.362 "uuid": "fadfe7a3-8f0b-427e-8b4d-6beafb4a7af7", 00:15:42.362 "strip_size_kb": 0, 00:15:42.362 "state": "online", 00:15:42.362 "raid_level": "raid1", 00:15:42.362 "superblock": true, 00:15:42.362 "num_base_bdevs": 4, 00:15:42.362 "num_base_bdevs_discovered": 4, 00:15:42.362 "num_base_bdevs_operational": 4, 00:15:42.362 "base_bdevs_list": [ 00:15:42.362 { 00:15:42.362 "name": "NewBaseBdev", 00:15:42.362 "uuid": "f93de783-b7e8-4aee-bf19-694b65a56876", 00:15:42.362 "is_configured": true, 00:15:42.362 "data_offset": 2048, 00:15:42.362 "data_size": 63488 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "name": "BaseBdev2", 00:15:42.362 "uuid": "ddd7a411-68f7-425f-94af-61685a80a45a", 00:15:42.362 "is_configured": true, 00:15:42.362 "data_offset": 2048, 00:15:42.362 "data_size": 63488 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "name": "BaseBdev3", 00:15:42.362 "uuid": 
"f9c814de-2347-4359-a891-d2a4980d3e9c", 00:15:42.362 "is_configured": true, 00:15:42.362 "data_offset": 2048, 00:15:42.362 "data_size": 63488 00:15:42.362 }, 00:15:42.362 { 00:15:42.362 "name": "BaseBdev4", 00:15:42.362 "uuid": "4aef8e62-6c4d-4ebd-ba93-ee5d02502a69", 00:15:42.362 "is_configured": true, 00:15:42.362 "data_offset": 2048, 00:15:42.362 "data_size": 63488 00:15:42.362 } 00:15:42.362 ] 00:15:42.362 } 00:15:42.362 } 00:15:42.362 }' 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:42.362 BaseBdev2 00:15:42.362 BaseBdev3 00:15:42.362 BaseBdev4' 00:15:42.362 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.620 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:42.620 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.620 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:42.620 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.620 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.620 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.621 
07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.621 [2024-11-20 07:12:39.897291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.621 [2024-11-20 07:12:39.897445] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:42.621 [2024-11-20 07:12:39.897690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:42.621 [2024-11-20 07:12:39.898213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:42.621 [2024-11-20 07:12:39.898250] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:42.621 07:12:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73923 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73923 ']' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73923 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:42.621 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73923 00:15:42.879 killing process with pid 73923 00:15:42.879 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:42.879 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:42.879 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73923' 00:15:42.879 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73923 00:15:42.879 [2024-11-20 07:12:39.942016] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:42.879 07:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73923 00:15:43.137 [2024-11-20 07:12:40.299239] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:44.073 ************************************ 00:15:44.073 END TEST raid_state_function_test_sb 00:15:44.073 ************************************ 00:15:44.073 07:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:44.073 00:15:44.073 real 0m12.742s 00:15:44.073 user 0m21.125s 00:15:44.073 sys 0m1.805s 00:15:44.073 07:12:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:44.073 07:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.073 07:12:41 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:15:44.073 07:12:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:44.073 07:12:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:44.073 07:12:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:44.073 ************************************ 00:15:44.073 START TEST raid_superblock_test 00:15:44.073 ************************************ 00:15:44.073 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:15:44.074 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:44.074 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:44.074 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:44.074 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:44.074 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74605 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74605 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74605 ']' 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:44.332 07:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.332 [2024-11-20 07:12:41.483537] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:44.332 [2024-11-20 07:12:41.483837] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74605 ] 00:15:44.591 [2024-11-20 07:12:41.680287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.591 [2024-11-20 07:12:41.836787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.851 [2024-11-20 07:12:42.055448] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:44.851 [2024-11-20 07:12:42.055544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:45.419 
07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 malloc1 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 [2024-11-20 07:12:42.504598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:45.419 [2024-11-20 07:12:42.504825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.419 [2024-11-20 07:12:42.505003] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:45.419 [2024-11-20 07:12:42.505129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.419 [2024-11-20 07:12:42.507947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.419 [2024-11-20 07:12:42.508115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:45.419 pt1 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 malloc2 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 [2024-11-20 07:12:42.560771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.419 [2024-11-20 07:12:42.560996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.419 [2024-11-20 07:12:42.561072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:45.419 [2024-11-20 07:12:42.561189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.419 [2024-11-20 07:12:42.564030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.419 [2024-11-20 07:12:42.564194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.419 
pt2 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 malloc3 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 [2024-11-20 07:12:42.631193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.419 [2024-11-20 07:12:42.631380] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.419 [2024-11-20 07:12:42.631433] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:45.419 [2024-11-20 07:12:42.631450] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.419 [2024-11-20 07:12:42.634207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.419 [2024-11-20 07:12:42.634252] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.419 pt3 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 malloc4 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.419 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.419 [2024-11-20 07:12:42.683051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:45.419 [2024-11-20 07:12:42.683125] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.419 [2024-11-20 07:12:42.683154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:45.419 [2024-11-20 07:12:42.683170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.419 [2024-11-20 07:12:42.685859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.419 [2024-11-20 07:12:42.685921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:45.419 pt4 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.420 [2024-11-20 07:12:42.691082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:45.420 [2024-11-20 07:12:42.693584] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.420 [2024-11-20 07:12:42.693798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.420 [2024-11-20 07:12:42.693988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:45.420 [2024-11-20 07:12:42.694341] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:45.420 [2024-11-20 07:12:42.694467] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:45.420 [2024-11-20 07:12:42.694827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:45.420 [2024-11-20 07:12:42.695073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:45.420 [2024-11-20 07:12:42.695097] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:45.420 [2024-11-20 07:12:42.695322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.420 
07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.420 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.678 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.678 "name": "raid_bdev1", 00:15:45.678 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:45.678 "strip_size_kb": 0, 00:15:45.678 "state": "online", 00:15:45.678 "raid_level": "raid1", 00:15:45.678 "superblock": true, 00:15:45.678 "num_base_bdevs": 4, 00:15:45.678 "num_base_bdevs_discovered": 4, 00:15:45.678 "num_base_bdevs_operational": 4, 00:15:45.678 "base_bdevs_list": [ 00:15:45.678 { 00:15:45.678 "name": "pt1", 00:15:45.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.678 "is_configured": true, 00:15:45.678 "data_offset": 2048, 00:15:45.678 "data_size": 63488 00:15:45.678 }, 00:15:45.678 { 00:15:45.678 "name": "pt2", 00:15:45.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.678 "is_configured": true, 00:15:45.678 "data_offset": 2048, 00:15:45.678 "data_size": 63488 00:15:45.678 }, 00:15:45.678 { 00:15:45.678 "name": "pt3", 00:15:45.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.678 "is_configured": true, 00:15:45.678 "data_offset": 2048, 00:15:45.678 "data_size": 63488 
00:15:45.678 }, 00:15:45.678 { 00:15:45.678 "name": "pt4", 00:15:45.678 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.678 "is_configured": true, 00:15:45.678 "data_offset": 2048, 00:15:45.678 "data_size": 63488 00:15:45.678 } 00:15:45.678 ] 00:15:45.678 }' 00:15:45.678 07:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.678 07:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.937 [2024-11-20 07:12:43.183828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.937 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.937 "name": "raid_bdev1", 00:15:45.937 "aliases": [ 00:15:45.937 "3e00bcb8-8808-41f4-9b2d-f150ece3de7d" 00:15:45.937 ], 
00:15:45.937 "product_name": "Raid Volume", 00:15:45.937 "block_size": 512, 00:15:45.937 "num_blocks": 63488, 00:15:45.937 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:45.937 "assigned_rate_limits": { 00:15:45.937 "rw_ios_per_sec": 0, 00:15:45.937 "rw_mbytes_per_sec": 0, 00:15:45.937 "r_mbytes_per_sec": 0, 00:15:45.937 "w_mbytes_per_sec": 0 00:15:45.937 }, 00:15:45.937 "claimed": false, 00:15:45.937 "zoned": false, 00:15:45.937 "supported_io_types": { 00:15:45.937 "read": true, 00:15:45.937 "write": true, 00:15:45.937 "unmap": false, 00:15:45.937 "flush": false, 00:15:45.937 "reset": true, 00:15:45.937 "nvme_admin": false, 00:15:45.937 "nvme_io": false, 00:15:45.937 "nvme_io_md": false, 00:15:45.937 "write_zeroes": true, 00:15:45.937 "zcopy": false, 00:15:45.937 "get_zone_info": false, 00:15:45.937 "zone_management": false, 00:15:45.937 "zone_append": false, 00:15:45.937 "compare": false, 00:15:45.937 "compare_and_write": false, 00:15:45.937 "abort": false, 00:15:45.937 "seek_hole": false, 00:15:45.937 "seek_data": false, 00:15:45.937 "copy": false, 00:15:45.937 "nvme_iov_md": false 00:15:45.937 }, 00:15:45.937 "memory_domains": [ 00:15:45.937 { 00:15:45.937 "dma_device_id": "system", 00:15:45.937 "dma_device_type": 1 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.937 "dma_device_type": 2 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": "system", 00:15:45.937 "dma_device_type": 1 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.937 "dma_device_type": 2 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": "system", 00:15:45.937 "dma_device_type": 1 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.937 "dma_device_type": 2 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": "system", 00:15:45.937 "dma_device_type": 1 00:15:45.937 }, 00:15:45.937 { 00:15:45.937 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:45.937 "dma_device_type": 2 00:15:45.937 } 00:15:45.937 ], 00:15:45.937 "driver_specific": { 00:15:45.937 "raid": { 00:15:45.937 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:45.937 "strip_size_kb": 0, 00:15:45.937 "state": "online", 00:15:45.937 "raid_level": "raid1", 00:15:45.937 "superblock": true, 00:15:45.937 "num_base_bdevs": 4, 00:15:45.937 "num_base_bdevs_discovered": 4, 00:15:45.937 "num_base_bdevs_operational": 4, 00:15:45.937 "base_bdevs_list": [ 00:15:45.937 { 00:15:45.937 "name": "pt1", 00:15:45.937 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.937 "is_configured": true, 00:15:45.937 "data_offset": 2048, 00:15:45.938 "data_size": 63488 00:15:45.938 }, 00:15:45.938 { 00:15:45.938 "name": "pt2", 00:15:45.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.938 "is_configured": true, 00:15:45.938 "data_offset": 2048, 00:15:45.938 "data_size": 63488 00:15:45.938 }, 00:15:45.938 { 00:15:45.938 "name": "pt3", 00:15:45.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.938 "is_configured": true, 00:15:45.938 "data_offset": 2048, 00:15:45.938 "data_size": 63488 00:15:45.938 }, 00:15:45.938 { 00:15:45.938 "name": "pt4", 00:15:45.938 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.938 "is_configured": true, 00:15:45.938 "data_offset": 2048, 00:15:45.938 "data_size": 63488 00:15:45.938 } 00:15:45.938 ] 00:15:45.938 } 00:15:45.938 } 00:15:45.938 }' 00:15:45.938 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:46.196 pt2 00:15:46.196 pt3 00:15:46.196 pt4' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.196 07:12:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.196 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | 
.uuid' 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.455 [2024-11-20 07:12:43.535862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3e00bcb8-8808-41f4-9b2d-f150ece3de7d 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3e00bcb8-8808-41f4-9b2d-f150ece3de7d ']' 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.455 [2024-11-20 07:12:43.583529] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.455 [2024-11-20 07:12:43.583668] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.455 [2024-11-20 07:12:43.583911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.455 [2024-11-20 07:12:43.584138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.455 [2024-11-20 07:12:43.584277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.455 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:46.456 07:12:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 [2024-11-20 07:12:43.731552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:46.456 [2024-11-20 07:12:43.734226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:46.456 [2024-11-20 07:12:43.734314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:46.456 [2024-11-20 07:12:43.734366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:46.456 [2024-11-20 07:12:43.734434] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:46.456 [2024-11-20 07:12:43.734523] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:46.456 [2024-11-20 07:12:43.734560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:46.456 [2024-11-20 07:12:43.734595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:46.456 [2024-11-20 07:12:43.734618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.456 [2024-11-20 07:12:43.734636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:15:46.456 request: 00:15:46.456 { 00:15:46.456 "name": "raid_bdev1", 00:15:46.456 "raid_level": "raid1", 00:15:46.456 "base_bdevs": [ 00:15:46.456 "malloc1", 00:15:46.456 "malloc2", 00:15:46.456 "malloc3", 00:15:46.456 "malloc4" 00:15:46.456 ], 00:15:46.456 "superblock": false, 00:15:46.456 "method": "bdev_raid_create", 00:15:46.456 "req_id": 1 00:15:46.456 } 00:15:46.456 Got JSON-RPC error response 00:15:46.456 response: 00:15:46.456 { 00:15:46.456 "code": -17, 00:15:46.456 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:46.456 } 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:46.456 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.715 
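The `NOT rpc_cmd bdev_raid_create` call above is expected to fail: the malloc bdevs still carry the superblock written for `raid_bdev1`, so the target rejects re-creation with JSON-RPC error code -17 ("File exists"). The shell helper only asserts that the command failed, but the error payload shown in the trace is structured enough to branch on. A minimal illustrative sketch (plain Python over the error body copied verbatim from the log above — not a live RPC call):

```python
import json

# Error payload copied from the trace: creating raid_bdev1 a second time
# over base bdevs that already hold its superblock fails with -17 (-EEXIST).
response_body = '''
{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}
'''

err = json.loads(response_body)

# A caller can distinguish "already exists" from other failures by code,
# rather than matching on the message text.
already_exists = (err["code"] == -17)
print(already_exists)  # True
```

The test harness reaches the same conclusion via its `es=1` bookkeeping; checking the numeric code is just a more direct way to express the same expectation.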
07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.715 [2024-11-20 07:12:43.799565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.715 [2024-11-20 07:12:43.799761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.715 [2024-11-20 07:12:43.799927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:46.715 [2024-11-20 07:12:43.800056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.715 [2024-11-20 07:12:43.803010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.715 [2024-11-20 07:12:43.803169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.715 [2024-11-20 07:12:43.803395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:46.715 [2024-11-20 07:12:43.803565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.715 pt1 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:46.715 07:12:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.715 "name": "raid_bdev1", 00:15:46.715 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:46.715 "strip_size_kb": 0, 00:15:46.715 "state": "configuring", 00:15:46.715 "raid_level": "raid1", 00:15:46.715 "superblock": true, 00:15:46.715 "num_base_bdevs": 4, 00:15:46.715 "num_base_bdevs_discovered": 1, 00:15:46.715 "num_base_bdevs_operational": 4, 00:15:46.715 "base_bdevs_list": [ 00:15:46.715 { 00:15:46.715 "name": "pt1", 00:15:46.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.715 "is_configured": true, 00:15:46.715 "data_offset": 2048, 00:15:46.715 "data_size": 63488 00:15:46.715 }, 00:15:46.715 { 00:15:46.715 "name": null, 00:15:46.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.715 "is_configured": false, 00:15:46.715 "data_offset": 2048, 00:15:46.715 "data_size": 63488 00:15:46.715 }, 00:15:46.715 { 00:15:46.715 "name": null, 00:15:46.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.715 
"is_configured": false, 00:15:46.715 "data_offset": 2048, 00:15:46.715 "data_size": 63488 00:15:46.715 }, 00:15:46.715 { 00:15:46.715 "name": null, 00:15:46.715 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.715 "is_configured": false, 00:15:46.715 "data_offset": 2048, 00:15:46.715 "data_size": 63488 00:15:46.715 } 00:15:46.715 ] 00:15:46.715 }' 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.715 07:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.283 [2024-11-20 07:12:44.344071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.283 [2024-11-20 07:12:44.344298] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.283 [2024-11-20 07:12:44.344373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:47.283 [2024-11-20 07:12:44.344498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.283 [2024-11-20 07:12:44.345101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.283 [2024-11-20 07:12:44.345142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.283 [2024-11-20 07:12:44.345253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:47.283 [2024-11-20 07:12:44.345297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:15:47.283 pt2 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.283 [2024-11-20 07:12:44.352042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.283 "name": "raid_bdev1", 00:15:47.283 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:47.283 "strip_size_kb": 0, 00:15:47.283 "state": "configuring", 00:15:47.283 "raid_level": "raid1", 00:15:47.283 "superblock": true, 00:15:47.283 "num_base_bdevs": 4, 00:15:47.283 "num_base_bdevs_discovered": 1, 00:15:47.283 "num_base_bdevs_operational": 4, 00:15:47.283 "base_bdevs_list": [ 00:15:47.283 { 00:15:47.283 "name": "pt1", 00:15:47.283 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.283 "is_configured": true, 00:15:47.283 "data_offset": 2048, 00:15:47.283 "data_size": 63488 00:15:47.283 }, 00:15:47.283 { 00:15:47.283 "name": null, 00:15:47.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.283 "is_configured": false, 00:15:47.283 "data_offset": 0, 00:15:47.283 "data_size": 63488 00:15:47.283 }, 00:15:47.283 { 00:15:47.283 "name": null, 00:15:47.283 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.283 "is_configured": false, 00:15:47.283 "data_offset": 2048, 00:15:47.283 "data_size": 63488 00:15:47.283 }, 00:15:47.283 { 00:15:47.283 "name": null, 00:15:47.283 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.283 "is_configured": false, 00:15:47.283 "data_offset": 2048, 00:15:47.283 "data_size": 63488 00:15:47.283 } 00:15:47.283 ] 00:15:47.283 }' 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.283 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.542 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:15:47.542 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:47.542 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.542 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.542 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.801 [2024-11-20 07:12:44.864202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.801 [2024-11-20 07:12:44.864456] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.801 [2024-11-20 07:12:44.864549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:47.801 [2024-11-20 07:12:44.864785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.801 [2024-11-20 07:12:44.865398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.801 [2024-11-20 07:12:44.865451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.801 [2024-11-20 07:12:44.865568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:47.801 [2024-11-20 07:12:44.865600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.801 pt2 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.801 07:12:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.801 [2024-11-20 07:12:44.876165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.801 [2024-11-20 07:12:44.876345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.801 [2024-11-20 07:12:44.876416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:47.801 [2024-11-20 07:12:44.876587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.801 [2024-11-20 07:12:44.877065] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.801 [2024-11-20 07:12:44.877117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.801 [2024-11-20 07:12:44.877201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:47.801 [2024-11-20 07:12:44.877241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.801 pt3 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:47.801 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.802 [2024-11-20 07:12:44.884150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:47.802 [2024-11-20 
07:12:44.884320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.802 [2024-11-20 07:12:44.884452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:47.802 [2024-11-20 07:12:44.884562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.802 [2024-11-20 07:12:44.885073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.802 [2024-11-20 07:12:44.885212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:47.802 [2024-11-20 07:12:44.885309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:47.802 [2024-11-20 07:12:44.885339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:47.802 [2024-11-20 07:12:44.885529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.802 [2024-11-20 07:12:44.885545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:47.802 [2024-11-20 07:12:44.885881] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:47.802 [2024-11-20 07:12:44.886073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.802 [2024-11-20 07:12:44.886093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:47.802 [2024-11-20 07:12:44.886263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.802 pt4 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.802 "name": "raid_bdev1", 00:15:47.802 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:47.802 "strip_size_kb": 0, 00:15:47.802 "state": "online", 00:15:47.802 "raid_level": "raid1", 00:15:47.802 "superblock": true, 00:15:47.802 "num_base_bdevs": 4, 00:15:47.802 
"num_base_bdevs_discovered": 4, 00:15:47.802 "num_base_bdevs_operational": 4, 00:15:47.802 "base_bdevs_list": [ 00:15:47.802 { 00:15:47.802 "name": "pt1", 00:15:47.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.802 "is_configured": true, 00:15:47.802 "data_offset": 2048, 00:15:47.802 "data_size": 63488 00:15:47.802 }, 00:15:47.802 { 00:15:47.802 "name": "pt2", 00:15:47.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.802 "is_configured": true, 00:15:47.802 "data_offset": 2048, 00:15:47.802 "data_size": 63488 00:15:47.802 }, 00:15:47.802 { 00:15:47.802 "name": "pt3", 00:15:47.802 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.802 "is_configured": true, 00:15:47.802 "data_offset": 2048, 00:15:47.802 "data_size": 63488 00:15:47.802 }, 00:15:47.802 { 00:15:47.802 "name": "pt4", 00:15:47.802 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.802 "is_configured": true, 00:15:47.802 "data_offset": 2048, 00:15:47.802 "data_size": 63488 00:15:47.802 } 00:15:47.802 ] 00:15:47.802 }' 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.802 07:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.369 [2024-11-20 07:12:45.412742] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.369 "name": "raid_bdev1", 00:15:48.369 "aliases": [ 00:15:48.369 "3e00bcb8-8808-41f4-9b2d-f150ece3de7d" 00:15:48.369 ], 00:15:48.369 "product_name": "Raid Volume", 00:15:48.369 "block_size": 512, 00:15:48.369 "num_blocks": 63488, 00:15:48.369 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:48.369 "assigned_rate_limits": { 00:15:48.369 "rw_ios_per_sec": 0, 00:15:48.369 "rw_mbytes_per_sec": 0, 00:15:48.369 "r_mbytes_per_sec": 0, 00:15:48.369 "w_mbytes_per_sec": 0 00:15:48.369 }, 00:15:48.369 "claimed": false, 00:15:48.369 "zoned": false, 00:15:48.369 "supported_io_types": { 00:15:48.369 "read": true, 00:15:48.369 "write": true, 00:15:48.369 "unmap": false, 00:15:48.369 "flush": false, 00:15:48.369 "reset": true, 00:15:48.369 "nvme_admin": false, 00:15:48.369 "nvme_io": false, 00:15:48.369 "nvme_io_md": false, 00:15:48.369 "write_zeroes": true, 00:15:48.369 "zcopy": false, 00:15:48.369 "get_zone_info": false, 00:15:48.369 "zone_management": false, 00:15:48.369 "zone_append": false, 00:15:48.369 "compare": false, 00:15:48.369 "compare_and_write": false, 00:15:48.369 "abort": false, 00:15:48.369 "seek_hole": false, 00:15:48.369 "seek_data": false, 00:15:48.369 "copy": false, 00:15:48.369 "nvme_iov_md": false 00:15:48.369 }, 00:15:48.369 "memory_domains": [ 00:15:48.369 { 00:15:48.369 "dma_device_id": "system", 00:15:48.369 
"dma_device_type": 1 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.369 "dma_device_type": 2 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "system", 00:15:48.369 "dma_device_type": 1 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.369 "dma_device_type": 2 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "system", 00:15:48.369 "dma_device_type": 1 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.369 "dma_device_type": 2 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "system", 00:15:48.369 "dma_device_type": 1 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.369 "dma_device_type": 2 00:15:48.369 } 00:15:48.369 ], 00:15:48.369 "driver_specific": { 00:15:48.369 "raid": { 00:15:48.369 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:48.369 "strip_size_kb": 0, 00:15:48.369 "state": "online", 00:15:48.369 "raid_level": "raid1", 00:15:48.369 "superblock": true, 00:15:48.369 "num_base_bdevs": 4, 00:15:48.369 "num_base_bdevs_discovered": 4, 00:15:48.369 "num_base_bdevs_operational": 4, 00:15:48.369 "base_bdevs_list": [ 00:15:48.369 { 00:15:48.369 "name": "pt1", 00:15:48.369 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.369 "is_configured": true, 00:15:48.369 "data_offset": 2048, 00:15:48.369 "data_size": 63488 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "name": "pt2", 00:15:48.369 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.369 "is_configured": true, 00:15:48.369 "data_offset": 2048, 00:15:48.369 "data_size": 63488 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "name": "pt3", 00:15:48.369 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.369 "is_configured": true, 00:15:48.369 "data_offset": 2048, 00:15:48.369 "data_size": 63488 00:15:48.369 }, 00:15:48.369 { 00:15:48.369 "name": "pt4", 00:15:48.369 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:48.369 "is_configured": true, 00:15:48.369 "data_offset": 2048, 00:15:48.369 "data_size": 63488 00:15:48.369 } 00:15:48.369 ] 00:15:48.369 } 00:15:48.369 } 00:15:48.369 }' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:48.369 pt2 00:15:48.369 pt3 00:15:48.369 pt4' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.369 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.628 07:12:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.628 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:48.629 [2024-11-20 07:12:45.784849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3e00bcb8-8808-41f4-9b2d-f150ece3de7d '!=' 3e00bcb8-8808-41f4-9b2d-f150ece3de7d ']' 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.629 [2024-11-20 07:12:45.852483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:48.629 
07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.629 "name": "raid_bdev1", 00:15:48.629 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:48.629 "strip_size_kb": 0, 00:15:48.629 "state": 
"online", 00:15:48.629 "raid_level": "raid1", 00:15:48.629 "superblock": true, 00:15:48.629 "num_base_bdevs": 4, 00:15:48.629 "num_base_bdevs_discovered": 3, 00:15:48.629 "num_base_bdevs_operational": 3, 00:15:48.629 "base_bdevs_list": [ 00:15:48.629 { 00:15:48.629 "name": null, 00:15:48.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.629 "is_configured": false, 00:15:48.629 "data_offset": 0, 00:15:48.629 "data_size": 63488 00:15:48.629 }, 00:15:48.629 { 00:15:48.629 "name": "pt2", 00:15:48.629 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.629 "is_configured": true, 00:15:48.629 "data_offset": 2048, 00:15:48.629 "data_size": 63488 00:15:48.629 }, 00:15:48.629 { 00:15:48.629 "name": "pt3", 00:15:48.629 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.629 "is_configured": true, 00:15:48.629 "data_offset": 2048, 00:15:48.629 "data_size": 63488 00:15:48.629 }, 00:15:48.629 { 00:15:48.629 "name": "pt4", 00:15:48.629 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.629 "is_configured": true, 00:15:48.629 "data_offset": 2048, 00:15:48.629 "data_size": 63488 00:15:48.629 } 00:15:48.629 ] 00:15:48.629 }' 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.629 07:12:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.196 [2024-11-20 07:12:46.364643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.196 [2024-11-20 07:12:46.364808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.196 [2024-11-20 07:12:46.364941] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.196 [2024-11-20 07:12:46.365049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.196 [2024-11-20 07:12:46.365067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.196 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.197 [2024-11-20 07:12:46.456634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.197 [2024-11-20 
07:12:46.456819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.197 [2024-11-20 07:12:46.456911] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:49.197 [2024-11-20 07:12:46.457091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.197 [2024-11-20 07:12:46.460021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.197 [2024-11-20 07:12:46.460184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.197 [2024-11-20 07:12:46.460396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.197 [2024-11-20 07:12:46.460556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.197 pt2 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.197 07:12:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.197 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.455 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.455 "name": "raid_bdev1", 00:15:49.455 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:49.455 "strip_size_kb": 0, 00:15:49.455 "state": "configuring", 00:15:49.455 "raid_level": "raid1", 00:15:49.455 "superblock": true, 00:15:49.455 "num_base_bdevs": 4, 00:15:49.455 "num_base_bdevs_discovered": 1, 00:15:49.455 "num_base_bdevs_operational": 3, 00:15:49.455 "base_bdevs_list": [ 00:15:49.455 { 00:15:49.455 "name": null, 00:15:49.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.455 "is_configured": false, 00:15:49.455 "data_offset": 2048, 00:15:49.455 "data_size": 63488 00:15:49.455 }, 00:15:49.455 { 00:15:49.455 "name": "pt2", 00:15:49.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.455 "is_configured": true, 00:15:49.455 "data_offset": 2048, 00:15:49.455 "data_size": 63488 00:15:49.455 }, 00:15:49.455 { 00:15:49.455 "name": null, 00:15:49.455 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.455 "is_configured": false, 00:15:49.455 "data_offset": 2048, 00:15:49.455 "data_size": 63488 00:15:49.455 }, 00:15:49.455 { 00:15:49.455 "name": null, 00:15:49.455 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.455 "is_configured": false, 00:15:49.455 "data_offset": 2048, 00:15:49.455 "data_size": 63488 00:15:49.455 
} 00:15:49.455 ] 00:15:49.455 }' 00:15:49.455 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.455 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.713 [2024-11-20 07:12:46.973004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.713 [2024-11-20 07:12:46.973206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.713 [2024-11-20 07:12:46.973286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:49.713 [2024-11-20 07:12:46.973441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.713 [2024-11-20 07:12:46.974060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.713 [2024-11-20 07:12:46.974087] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.713 [2024-11-20 07:12:46.974194] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:49.713 [2024-11-20 07:12:46.974226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.713 pt3 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # 
verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.713 07:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.972 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.972 "name": "raid_bdev1", 00:15:49.972 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:49.972 "strip_size_kb": 0, 00:15:49.972 "state": "configuring", 00:15:49.972 "raid_level": "raid1", 00:15:49.972 "superblock": true, 00:15:49.972 "num_base_bdevs": 4, 00:15:49.972 "num_base_bdevs_discovered": 2, 
00:15:49.972 "num_base_bdevs_operational": 3, 00:15:49.972 "base_bdevs_list": [ 00:15:49.972 { 00:15:49.972 "name": null, 00:15:49.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.972 "is_configured": false, 00:15:49.972 "data_offset": 2048, 00:15:49.972 "data_size": 63488 00:15:49.972 }, 00:15:49.972 { 00:15:49.972 "name": "pt2", 00:15:49.972 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.972 "is_configured": true, 00:15:49.972 "data_offset": 2048, 00:15:49.972 "data_size": 63488 00:15:49.972 }, 00:15:49.972 { 00:15:49.972 "name": "pt3", 00:15:49.972 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.972 "is_configured": true, 00:15:49.972 "data_offset": 2048, 00:15:49.972 "data_size": 63488 00:15:49.972 }, 00:15:49.972 { 00:15:49.972 "name": null, 00:15:49.972 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.972 "is_configured": false, 00:15:49.972 "data_offset": 2048, 00:15:49.972 "data_size": 63488 00:15:49.972 } 00:15:49.972 ] 00:15:49.972 }' 00:15:49.972 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.972 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:50.230 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:50.230 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:50.230 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:50.230 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.230 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.230 [2024-11-20 07:12:47.509153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:50.230 [2024-11-20 
07:12:47.509357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.230 [2024-11-20 07:12:47.509437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:50.230 [2024-11-20 07:12:47.509599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.230 [2024-11-20 07:12:47.510222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.230 [2024-11-20 07:12:47.510379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:50.230 [2024-11-20 07:12:47.510596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:50.230 [2024-11-20 07:12:47.510645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:50.230 [2024-11-20 07:12:47.510822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:50.230 [2024-11-20 07:12:47.510838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:50.230 [2024-11-20 07:12:47.511166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:50.230 [2024-11-20 07:12:47.511353] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:50.231 [2024-11-20 07:12:47.511374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:50.231 [2024-11-20 07:12:47.511539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.231 pt4 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.231 07:12:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.231 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.489 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.489 "name": "raid_bdev1", 00:15:50.489 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:50.489 "strip_size_kb": 0, 00:15:50.489 "state": "online", 00:15:50.489 "raid_level": "raid1", 00:15:50.489 "superblock": true, 00:15:50.489 "num_base_bdevs": 4, 00:15:50.489 "num_base_bdevs_discovered": 3, 00:15:50.489 "num_base_bdevs_operational": 3, 00:15:50.489 "base_bdevs_list": [ 00:15:50.489 { 00:15:50.489 "name": null, 00:15:50.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.489 
"is_configured": false, 00:15:50.489 "data_offset": 2048, 00:15:50.489 "data_size": 63488 00:15:50.489 }, 00:15:50.489 { 00:15:50.489 "name": "pt2", 00:15:50.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.489 "is_configured": true, 00:15:50.489 "data_offset": 2048, 00:15:50.489 "data_size": 63488 00:15:50.489 }, 00:15:50.489 { 00:15:50.489 "name": "pt3", 00:15:50.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.489 "is_configured": true, 00:15:50.489 "data_offset": 2048, 00:15:50.489 "data_size": 63488 00:15:50.489 }, 00:15:50.489 { 00:15:50.489 "name": "pt4", 00:15:50.489 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.489 "is_configured": true, 00:15:50.489 "data_offset": 2048, 00:15:50.489 "data_size": 63488 00:15:50.489 } 00:15:50.489 ] 00:15:50.489 }' 00:15:50.489 07:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.489 07:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.748 [2024-11-20 07:12:48.029235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.748 [2024-11-20 07:12:48.029393] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.748 [2024-11-20 07:12:48.029513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.748 [2024-11-20 07:12:48.029613] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.748 [2024-11-20 07:12:48.029635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 
00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.748 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.007 [2024-11-20 07:12:48.097245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.007 [2024-11-20 07:12:48.097481] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:51.007 [2024-11-20 07:12:48.097516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:51.007 [2024-11-20 07:12:48.097537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.007 [2024-11-20 07:12:48.100428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.007 pt1 00:15:51.007 [2024-11-20 07:12:48.100621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.007 [2024-11-20 07:12:48.100737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:51.007 [2024-11-20 07:12:48.100800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:51.007 [2024-11-20 07:12:48.100989] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:51.007 [2024-11-20 07:12:48.101014] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.007 [2024-11-20 07:12:48.101034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:51.007 [2024-11-20 07:12:48.101118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.007 [2024-11-20 07:12:48.101260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.007 "name": "raid_bdev1", 00:15:51.007 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:51.007 "strip_size_kb": 0, 00:15:51.007 "state": "configuring", 00:15:51.007 "raid_level": "raid1", 00:15:51.007 "superblock": true, 00:15:51.007 "num_base_bdevs": 4, 00:15:51.007 "num_base_bdevs_discovered": 2, 00:15:51.007 "num_base_bdevs_operational": 3, 00:15:51.007 "base_bdevs_list": [ 00:15:51.007 { 00:15:51.007 "name": null, 00:15:51.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.007 "is_configured": false, 00:15:51.007 
"data_offset": 2048, 00:15:51.007 "data_size": 63488 00:15:51.007 }, 00:15:51.007 { 00:15:51.007 "name": "pt2", 00:15:51.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.007 "is_configured": true, 00:15:51.007 "data_offset": 2048, 00:15:51.007 "data_size": 63488 00:15:51.007 }, 00:15:51.007 { 00:15:51.007 "name": "pt3", 00:15:51.007 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.007 "is_configured": true, 00:15:51.007 "data_offset": 2048, 00:15:51.007 "data_size": 63488 00:15:51.007 }, 00:15:51.007 { 00:15:51.007 "name": null, 00:15:51.007 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.007 "is_configured": false, 00:15:51.007 "data_offset": 2048, 00:15:51.007 "data_size": 63488 00:15:51.007 } 00:15:51.007 ] 00:15:51.007 }' 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.007 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:51.575 [2024-11-20 07:12:48.661519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:51.575 [2024-11-20 07:12:48.661752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.575 [2024-11-20 07:12:48.661830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:51.575 [2024-11-20 07:12:48.662049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.575 [2024-11-20 07:12:48.662600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.575 [2024-11-20 07:12:48.662633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:51.575 [2024-11-20 07:12:48.662737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:51.575 [2024-11-20 07:12:48.662776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:51.575 [2024-11-20 07:12:48.662957] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:51.575 [2024-11-20 07:12:48.662980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:51.575 [2024-11-20 07:12:48.663296] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:51.575 [2024-11-20 07:12:48.663479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:51.575 [2024-11-20 07:12:48.663499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:51.575 [2024-11-20 07:12:48.663675] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.575 pt4 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.575 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.575 "name": "raid_bdev1", 00:15:51.575 "uuid": "3e00bcb8-8808-41f4-9b2d-f150ece3de7d", 00:15:51.575 "strip_size_kb": 0, 00:15:51.575 "state": "online", 00:15:51.575 "raid_level": "raid1", 00:15:51.575 "superblock": true, 00:15:51.575 "num_base_bdevs": 4, 00:15:51.575 "num_base_bdevs_discovered": 3, 00:15:51.575 "num_base_bdevs_operational": 3, 00:15:51.575 
"base_bdevs_list": [ 00:15:51.575 { 00:15:51.575 "name": null, 00:15:51.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.575 "is_configured": false, 00:15:51.575 "data_offset": 2048, 00:15:51.575 "data_size": 63488 00:15:51.575 }, 00:15:51.575 { 00:15:51.575 "name": "pt2", 00:15:51.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.576 "is_configured": true, 00:15:51.576 "data_offset": 2048, 00:15:51.576 "data_size": 63488 00:15:51.576 }, 00:15:51.576 { 00:15:51.576 "name": "pt3", 00:15:51.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.576 "is_configured": true, 00:15:51.576 "data_offset": 2048, 00:15:51.576 "data_size": 63488 00:15:51.576 }, 00:15:51.576 { 00:15:51.576 "name": "pt4", 00:15:51.576 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.576 "is_configured": true, 00:15:51.576 "data_offset": 2048, 00:15:51.576 "data_size": 63488 00:15:51.576 } 00:15:51.576 ] 00:15:51.576 }' 00:15:51.576 07:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.576 07:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # 
jq -r '.[] | .uuid' 00:15:52.141 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.142 [2024-11-20 07:12:49.298091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3e00bcb8-8808-41f4-9b2d-f150ece3de7d '!=' 3e00bcb8-8808-41f4-9b2d-f150ece3de7d ']' 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74605 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74605 ']' 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74605 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74605 00:15:52.142 killing process with pid 74605 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74605' 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74605 00:15:52.142 [2024-11-20 07:12:49.375727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.142 07:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74605 00:15:52.142 
[2024-11-20 07:12:49.375831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.142 [2024-11-20 07:12:49.375965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.142 [2024-11-20 07:12:49.375989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:52.708 [2024-11-20 07:12:49.730677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:53.642 07:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:53.642 00:15:53.642 real 0m9.367s 00:15:53.642 user 0m15.429s 00:15:53.642 sys 0m1.338s 00:15:53.642 07:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.642 07:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.642 ************************************ 00:15:53.642 END TEST raid_superblock_test 00:15:53.642 ************************************ 00:15:53.642 07:12:50 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:15:53.642 07:12:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:53.642 07:12:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.642 07:12:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:53.642 ************************************ 00:15:53.642 START TEST raid_read_error_test 00:15:53.642 ************************************ 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local 
error_io_type=read 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:53.642 
07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6iRjUklI5U 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75103 00:15:53.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75103 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75103 ']' 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:53.642 07:12:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.642 [2024-11-20 07:12:50.932979] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:53.642 [2024-11-20 07:12:50.933895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75103 ] 00:15:53.900 [2024-11-20 07:12:51.119170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.159 [2024-11-20 07:12:51.246717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.159 [2024-11-20 07:12:51.451930] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.159 [2024-11-20 07:12:51.452156] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.724 BaseBdev1_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.724 true 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.724 [2024-11-20 07:12:51.928355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:54.724 [2024-11-20 07:12:51.928582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.724 [2024-11-20 07:12:51.928620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:54.724 [2024-11-20 07:12:51.928639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.724 [2024-11-20 07:12:51.931442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.724 [2024-11-20 07:12:51.931491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.724 BaseBdev1 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.724 BaseBdev2_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.724 true 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.724 [2024-11-20 07:12:51.983889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:54.724 [2024-11-20 07:12:51.983954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.724 [2024-11-20 07:12:51.983978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:54.724 [2024-11-20 07:12:51.983994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.724 [2024-11-20 07:12:51.986802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.724 [2024-11-20 07:12:51.986848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:54.724 BaseBdev2 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.724 07:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 BaseBdev3_malloc 00:15:54.983 07:12:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 true 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 [2024-11-20 07:12:52.057604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:54.983 [2024-11-20 07:12:52.057680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.983 [2024-11-20 07:12:52.057704] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:54.983 [2024-11-20 07:12:52.057720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.983 [2024-11-20 07:12:52.060578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.983 [2024-11-20 07:12:52.060792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:54.983 BaseBdev3 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 BaseBdev4_malloc 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 true 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 [2024-11-20 07:12:52.118804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:54.983 [2024-11-20 07:12:52.118899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.983 [2024-11-20 07:12:52.118927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:54.983 [2024-11-20 07:12:52.118944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.983 [2024-11-20 07:12:52.121626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.983 [2024-11-20 07:12:52.121679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:54.983 BaseBdev4 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 [2024-11-20 07:12:52.126897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:54.983 [2024-11-20 07:12:52.129311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:54.983 [2024-11-20 07:12:52.129421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.983 [2024-11-20 07:12:52.129520] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:54.983 [2024-11-20 07:12:52.129836] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:54.983 [2024-11-20 07:12:52.129860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:54.983 [2024-11-20 07:12:52.130183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:54.983 [2024-11-20 07:12:52.130394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:54.983 [2024-11-20 07:12:52.130410] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:54.983 [2024-11-20 07:12:52.130595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:54.983 07:12:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.983 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.983 "name": "raid_bdev1", 00:15:54.983 "uuid": "15a369a6-6337-4189-a0e3-975c52c7394b", 00:15:54.983 "strip_size_kb": 0, 00:15:54.983 "state": "online", 00:15:54.983 "raid_level": "raid1", 00:15:54.983 "superblock": true, 00:15:54.983 "num_base_bdevs": 4, 00:15:54.983 "num_base_bdevs_discovered": 4, 00:15:54.983 "num_base_bdevs_operational": 4, 00:15:54.983 "base_bdevs_list": [ 00:15:54.983 { 
00:15:54.983 "name": "BaseBdev1", 00:15:54.983 "uuid": "95c04ef9-afe5-5add-8d9a-fb34d1741ad2", 00:15:54.983 "is_configured": true, 00:15:54.983 "data_offset": 2048, 00:15:54.983 "data_size": 63488 00:15:54.983 }, 00:15:54.983 { 00:15:54.983 "name": "BaseBdev2", 00:15:54.983 "uuid": "ba495698-9142-533a-87fd-f2ecac9b330a", 00:15:54.983 "is_configured": true, 00:15:54.983 "data_offset": 2048, 00:15:54.983 "data_size": 63488 00:15:54.983 }, 00:15:54.983 { 00:15:54.983 "name": "BaseBdev3", 00:15:54.983 "uuid": "3708afaf-4a82-5854-ba60-61b78a060a06", 00:15:54.983 "is_configured": true, 00:15:54.983 "data_offset": 2048, 00:15:54.983 "data_size": 63488 00:15:54.983 }, 00:15:54.983 { 00:15:54.983 "name": "BaseBdev4", 00:15:54.983 "uuid": "f4295562-7eb9-530d-be35-5e4e6dd5fda7", 00:15:54.983 "is_configured": true, 00:15:54.984 "data_offset": 2048, 00:15:54.984 "data_size": 63488 00:15:54.984 } 00:15:54.984 ] 00:15:54.984 }' 00:15:54.984 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.984 07:12:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.549 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:55.549 07:12:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:15:55.549 [2024-11-20 07:12:52.736520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.483 07:12:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.483 07:12:53 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.483 "name": "raid_bdev1", 00:15:56.483 "uuid": "15a369a6-6337-4189-a0e3-975c52c7394b", 00:15:56.483 "strip_size_kb": 0, 00:15:56.483 "state": "online", 00:15:56.483 "raid_level": "raid1", 00:15:56.483 "superblock": true, 00:15:56.483 "num_base_bdevs": 4, 00:15:56.483 "num_base_bdevs_discovered": 4, 00:15:56.483 "num_base_bdevs_operational": 4, 00:15:56.483 "base_bdevs_list": [ 00:15:56.483 { 00:15:56.483 "name": "BaseBdev1", 00:15:56.483 "uuid": "95c04ef9-afe5-5add-8d9a-fb34d1741ad2", 00:15:56.483 "is_configured": true, 00:15:56.483 "data_offset": 2048, 00:15:56.483 "data_size": 63488 00:15:56.483 }, 00:15:56.483 { 00:15:56.483 "name": "BaseBdev2", 00:15:56.483 "uuid": "ba495698-9142-533a-87fd-f2ecac9b330a", 00:15:56.483 "is_configured": true, 00:15:56.483 "data_offset": 2048, 00:15:56.483 "data_size": 63488 00:15:56.483 }, 00:15:56.483 { 00:15:56.483 "name": "BaseBdev3", 00:15:56.483 "uuid": "3708afaf-4a82-5854-ba60-61b78a060a06", 00:15:56.483 "is_configured": true, 00:15:56.483 "data_offset": 2048, 00:15:56.483 "data_size": 63488 00:15:56.483 }, 00:15:56.483 { 00:15:56.483 "name": "BaseBdev4", 00:15:56.483 "uuid": "f4295562-7eb9-530d-be35-5e4e6dd5fda7", 00:15:56.483 "is_configured": true, 00:15:56.483 "data_offset": 2048, 00:15:56.483 "data_size": 63488 00:15:56.483 } 00:15:56.483 ] 00:15:56.483 }' 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.483 07:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.050 [2024-11-20 07:12:54.195249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.050 [2024-11-20 07:12:54.195286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.050 [2024-11-20 07:12:54.198541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.050 [2024-11-20 07:12:54.198618] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.050 [2024-11-20 07:12:54.198772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.050 [2024-11-20 07:12:54.198792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:57.050 { 00:15:57.050 "results": [ 00:15:57.050 { 00:15:57.050 "job": "raid_bdev1", 00:15:57.050 "core_mask": "0x1", 00:15:57.050 "workload": "randrw", 00:15:57.050 "percentage": 50, 00:15:57.050 "status": "finished", 00:15:57.050 "queue_depth": 1, 00:15:57.050 "io_size": 131072, 00:15:57.050 "runtime": 1.456041, 00:15:57.050 "iops": 7668.053303444065, 00:15:57.050 "mibps": 958.5066629305081, 00:15:57.050 "io_failed": 0, 00:15:57.050 "io_timeout": 0, 00:15:57.050 "avg_latency_us": 126.18914627692055, 00:15:57.050 "min_latency_us": 40.02909090909091, 00:15:57.050 "max_latency_us": 2159.7090909090907 00:15:57.050 } 00:15:57.050 ], 00:15:57.050 "core_count": 1 00:15:57.050 } 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75103 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75103 ']' 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75103 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75103 00:15:57.050 killing process with pid 75103 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75103' 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75103 00:15:57.050 [2024-11-20 07:12:54.234827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.050 07:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75103 00:15:57.309 [2024-11-20 07:12:54.518957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6iRjUklI5U 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:15:58.708 ************************************ 00:15:58.708 END TEST raid_read_error_test 
00:15:58.708 ************************************ 00:15:58.708 00:15:58.708 real 0m4.798s 00:15:58.708 user 0m5.882s 00:15:58.708 sys 0m0.588s 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:58.708 07:12:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.708 07:12:55 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:15:58.708 07:12:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:58.708 07:12:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:58.708 07:12:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:58.708 ************************************ 00:15:58.708 START TEST raid_write_error_test 00:15:58.708 ************************************ 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ueOB6dxjX9 00:15:58.708 07:12:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75249 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75249 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75249 ']' 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.708 07:12:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.708 [2024-11-20 07:12:55.763254] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:15:58.708 [2024-11-20 07:12:55.763418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75249 ] 00:15:58.708 [2024-11-20 07:12:55.955701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.967 [2024-11-20 07:12:56.115529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.225 [2024-11-20 07:12:56.317333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.225 [2024-11-20 07:12:56.317375] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.484 BaseBdev1_malloc 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.484 true 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.484 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.484 [2024-11-20 07:12:56.794365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:59.484 [2024-11-20 07:12:56.794447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.484 [2024-11-20 07:12:56.794475] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:59.484 [2024-11-20 07:12:56.794499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.485 [2024-11-20 07:12:56.797575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.485 [2024-11-20 07:12:56.797625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:59.485 BaseBdev1 00:15:59.485 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.485 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.485 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:59.485 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.485 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.743 BaseBdev2_malloc 00:15:59.743 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:15:59.744 07:12:56 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 true 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 [2024-11-20 07:12:56.855194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:59.744 [2024-11-20 07:12:56.855261] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.744 [2024-11-20 07:12:56.855286] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:59.744 [2024-11-20 07:12:56.855304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.744 [2024-11-20 07:12:56.858044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.744 [2024-11-20 07:12:56.858227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:59.744 BaseBdev2 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:15:59.744 BaseBdev3_malloc 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 true 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 [2024-11-20 07:12:56.929782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:59.744 [2024-11-20 07:12:56.929863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.744 [2024-11-20 07:12:56.929928] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:59.744 [2024-11-20 07:12:56.929949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.744 [2024-11-20 07:12:56.932825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.744 [2024-11-20 07:12:56.932904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:59.744 BaseBdev3 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 BaseBdev4_malloc 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 true 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 [2024-11-20 07:12:56.990626] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:59.744 [2024-11-20 07:12:56.990690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.744 [2024-11-20 07:12:56.990717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:59.744 [2024-11-20 07:12:56.990735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.744 [2024-11-20 07:12:56.993492] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.744 [2024-11-20 07:12:56.993543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:59.744 BaseBdev4 
00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 [2024-11-20 07:12:56.998703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:59.744 [2024-11-20 07:12:57.001256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:59.744 [2024-11-20 07:12:57.001365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:59.744 [2024-11-20 07:12:57.001466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:59.744 [2024-11-20 07:12:57.001757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:59.744 [2024-11-20 07:12:57.001780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:59.744 [2024-11-20 07:12:57.002153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:15:59.744 [2024-11-20 07:12:57.002385] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:59.744 [2024-11-20 07:12:57.002402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:59.744 [2024-11-20 07:12:57.002652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.744 "name": "raid_bdev1", 00:15:59.744 "uuid": "9a14749f-c6ab-4053-897e-5c787bf7f94b", 00:15:59.744 "strip_size_kb": 0, 00:15:59.744 "state": "online", 00:15:59.744 "raid_level": "raid1", 00:15:59.744 "superblock": true, 00:15:59.744 "num_base_bdevs": 4, 00:15:59.744 "num_base_bdevs_discovered": 4, 00:15:59.744 
"num_base_bdevs_operational": 4, 00:15:59.744 "base_bdevs_list": [ 00:15:59.744 { 00:15:59.744 "name": "BaseBdev1", 00:15:59.744 "uuid": "764187a7-f5fb-590d-9b19-37e4ed35ec82", 00:15:59.744 "is_configured": true, 00:15:59.744 "data_offset": 2048, 00:15:59.744 "data_size": 63488 00:15:59.744 }, 00:15:59.744 { 00:15:59.744 "name": "BaseBdev2", 00:15:59.744 "uuid": "342877ca-aa04-5afe-b24a-a60ab3f75405", 00:15:59.744 "is_configured": true, 00:15:59.744 "data_offset": 2048, 00:15:59.744 "data_size": 63488 00:15:59.744 }, 00:15:59.744 { 00:15:59.744 "name": "BaseBdev3", 00:15:59.744 "uuid": "4fed67d2-d23a-5c2a-8e08-cf42ff806d60", 00:15:59.744 "is_configured": true, 00:15:59.744 "data_offset": 2048, 00:15:59.744 "data_size": 63488 00:15:59.744 }, 00:15:59.744 { 00:15:59.744 "name": "BaseBdev4", 00:15:59.744 "uuid": "75de31aa-832f-5eaa-866a-47ab40779e76", 00:15:59.744 "is_configured": true, 00:15:59.744 "data_offset": 2048, 00:15:59.744 "data_size": 63488 00:15:59.744 } 00:15:59.744 ] 00:15:59.744 }' 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.744 07:12:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.311 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:16:00.311 07:12:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:00.570 [2024-11-20 07:12:57.676354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.507 [2024-11-20 07:12:58.528848] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:01.507 [2024-11-20 07:12:58.528922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:01.507 [2024-11-20 07:12:58.529192] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.507 "name": "raid_bdev1", 00:16:01.507 "uuid": "9a14749f-c6ab-4053-897e-5c787bf7f94b", 00:16:01.507 "strip_size_kb": 0, 00:16:01.507 "state": "online", 00:16:01.507 "raid_level": "raid1", 00:16:01.507 "superblock": true, 00:16:01.507 "num_base_bdevs": 4, 00:16:01.507 "num_base_bdevs_discovered": 3, 00:16:01.507 "num_base_bdevs_operational": 3, 00:16:01.507 "base_bdevs_list": [ 00:16:01.507 { 00:16:01.507 "name": null, 00:16:01.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.507 "is_configured": false, 00:16:01.507 "data_offset": 0, 00:16:01.507 "data_size": 63488 00:16:01.507 }, 00:16:01.507 { 00:16:01.507 "name": "BaseBdev2", 00:16:01.507 "uuid": "342877ca-aa04-5afe-b24a-a60ab3f75405", 00:16:01.507 "is_configured": true, 00:16:01.507 "data_offset": 2048, 00:16:01.507 "data_size": 63488 00:16:01.507 }, 00:16:01.507 { 00:16:01.507 "name": "BaseBdev3", 00:16:01.507 "uuid": "4fed67d2-d23a-5c2a-8e08-cf42ff806d60", 00:16:01.507 "is_configured": true, 00:16:01.507 "data_offset": 2048, 00:16:01.507 "data_size": 63488 00:16:01.507 }, 00:16:01.507 { 00:16:01.507 "name": "BaseBdev4", 00:16:01.507 "uuid": "75de31aa-832f-5eaa-866a-47ab40779e76", 00:16:01.507 "is_configured": true, 00:16:01.507 "data_offset": 2048, 00:16:01.507 "data_size": 63488 00:16:01.507 } 00:16:01.507 ] 
00:16:01.507 }' 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.507 07:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.766 07:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:01.766 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.766 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.766 [2024-11-20 07:12:59.084385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.023 [2024-11-20 07:12:59.084593] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.023 [2024-11-20 07:12:59.088243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.023 [2024-11-20 07:12:59.088507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.023 [2024-11-20 07:12:59.088776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.023 [2024-11-20 07:12:59.088963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, sta{ 00:16:02.023 "results": [ 00:16:02.023 { 00:16:02.023 "job": "raid_bdev1", 00:16:02.023 "core_mask": "0x1", 00:16:02.023 "workload": "randrw", 00:16:02.023 "percentage": 50, 00:16:02.023 "status": "finished", 00:16:02.023 "queue_depth": 1, 00:16:02.023 "io_size": 131072, 00:16:02.023 "runtime": 1.405787, 00:16:02.023 "iops": 8462.163898229248, 00:16:02.023 "mibps": 1057.770487278656, 00:16:02.023 "io_failed": 0, 00:16:02.023 "io_timeout": 0, 00:16:02.023 "avg_latency_us": 113.9922186219967, 00:16:02.023 "min_latency_us": 40.49454545454545, 00:16:02.023 "max_latency_us": 1951.1854545454546 00:16:02.023 } 00:16:02.023 ], 00:16:02.023 "core_count": 1 00:16:02.023 } 00:16:02.023 
te offline 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75249 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75249 ']' 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75249 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75249 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75249' 00:16:02.023 killing process with pid 75249 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75249 00:16:02.023 [2024-11-20 07:12:59.130578] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:02.023 07:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75249 00:16:02.281 [2024-11-20 07:12:59.419466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ueOB6dxjX9 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:16:03.215 ************************************ 00:16:03.215 END TEST raid_write_error_test 00:16:03.215 ************************************ 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:03.215 00:16:03.215 real 0m4.858s 00:16:03.215 user 0m6.043s 00:16:03.215 sys 0m0.575s 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.215 07:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.475 07:13:00 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:16:03.475 07:13:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:16:03.475 07:13:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:16:03.475 07:13:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:03.475 07:13:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.475 07:13:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.475 ************************************ 00:16:03.475 START TEST raid_rebuild_test 00:16:03.475 ************************************ 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:03.475 
07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75387 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75387 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75387 ']' 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:03.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.475 07:13:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.475 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:03.475 Zero copy mechanism will not be used. 00:16:03.475 [2024-11-20 07:13:00.695009] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:16:03.475 [2024-11-20 07:13:00.695181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75387 ] 00:16:03.734 [2024-11-20 07:13:00.885136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.734 [2024-11-20 07:13:01.039986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.992 [2024-11-20 07:13:01.254494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.992 [2024-11-20 07:13:01.254572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.557 BaseBdev1_malloc 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:04.557 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 [2024-11-20 07:13:01.756705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:04.558 
[2024-11-20 07:13:01.756798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.558 [2024-11-20 07:13:01.756833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:04.558 [2024-11-20 07:13:01.756852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.558 [2024-11-20 07:13:01.759657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.558 [2024-11-20 07:13:01.759707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:04.558 BaseBdev1 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 BaseBdev2_malloc 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 [2024-11-20 07:13:01.804650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:04.558 [2024-11-20 07:13:01.804748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.558 [2024-11-20 07:13:01.804774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:16:04.558 [2024-11-20 07:13:01.804793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.558 [2024-11-20 07:13:01.807581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.558 [2024-11-20 07:13:01.807629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:04.558 BaseBdev2 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 spare_malloc 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.558 spare_delay 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.558 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.815 [2024-11-20 07:13:01.877190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.815 [2024-11-20 07:13:01.877272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:04.815 [2024-11-20 07:13:01.877302] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:04.815 [2024-11-20 07:13:01.877320] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.815 [2024-11-20 07:13:01.880042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.815 [2024-11-20 07:13:01.880099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.815 spare 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.816 [2024-11-20 07:13:01.885260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.816 [2024-11-20 07:13:01.887587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.816 [2024-11-20 07:13:01.887715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:04.816 [2024-11-20 07:13:01.887743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:04.816 [2024-11-20 07:13:01.888080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:04.816 [2024-11-20 07:13:01.888293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:04.816 [2024-11-20 07:13:01.888311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:04.816 [2024-11-20 07:13:01.888500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.816 "name": "raid_bdev1", 00:16:04.816 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:04.816 "strip_size_kb": 0, 00:16:04.816 "state": "online", 00:16:04.816 
"raid_level": "raid1", 00:16:04.816 "superblock": false, 00:16:04.816 "num_base_bdevs": 2, 00:16:04.816 "num_base_bdevs_discovered": 2, 00:16:04.816 "num_base_bdevs_operational": 2, 00:16:04.816 "base_bdevs_list": [ 00:16:04.816 { 00:16:04.816 "name": "BaseBdev1", 00:16:04.816 "uuid": "18bd4bc2-b6b4-5c5d-999e-aacba37811bd", 00:16:04.816 "is_configured": true, 00:16:04.816 "data_offset": 0, 00:16:04.816 "data_size": 65536 00:16:04.816 }, 00:16:04.816 { 00:16:04.816 "name": "BaseBdev2", 00:16:04.816 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:04.816 "is_configured": true, 00:16:04.816 "data_offset": 0, 00:16:04.816 "data_size": 65536 00:16:04.816 } 00:16:04.816 ] 00:16:04.816 }' 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.816 07:13:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.073 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:05.073 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:05.073 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.073 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.073 [2024-11-20 07:13:02.385744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.331 07:13:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.331 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:05.589 [2024-11-20 07:13:02.749605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:05.589 /dev/nbd0 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.589 1+0 records in 00:16:05.589 1+0 records out 00:16:05.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382738 s, 10.7 MB/s 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:05.589 07:13:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:16:12.278 65536+0 records in 00:16:12.278 65536+0 records out 00:16:12.278 33554432 bytes (34 MB, 32 MiB) copied, 6.41594 s, 5.2 MB/s 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:12.278 [2024-11-20 07:13:09.561832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.278 [2024-11-20 07:13:09.573965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.278 07:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.537 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.537 "name": "raid_bdev1", 00:16:12.537 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:12.537 "strip_size_kb": 0, 00:16:12.537 "state": "online", 00:16:12.538 "raid_level": "raid1", 00:16:12.538 "superblock": false, 00:16:12.538 "num_base_bdevs": 2, 00:16:12.538 "num_base_bdevs_discovered": 1, 00:16:12.538 "num_base_bdevs_operational": 1, 00:16:12.538 "base_bdevs_list": [ 00:16:12.538 { 00:16:12.538 "name": null, 00:16:12.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.538 "is_configured": false, 00:16:12.538 "data_offset": 0, 00:16:12.538 "data_size": 65536 00:16:12.538 }, 00:16:12.538 { 00:16:12.538 "name": "BaseBdev2", 00:16:12.538 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:12.538 "is_configured": true, 00:16:12.538 "data_offset": 0, 00:16:12.538 "data_size": 65536 00:16:12.538 } 00:16:12.538 ] 00:16:12.538 }' 00:16:12.538 07:13:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.538 07:13:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.796 07:13:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.796 07:13:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.796 07:13:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.796 [2024-11-20 07:13:10.086167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.796 [2024-11-20 07:13:10.102775] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:16:12.796 07:13:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.796 07:13:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.796 [2024-11-20 07:13:10.105333] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.175 "name": "raid_bdev1", 00:16:14.175 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:14.175 "strip_size_kb": 0, 00:16:14.175 "state": "online", 00:16:14.175 "raid_level": "raid1", 00:16:14.175 "superblock": false, 00:16:14.175 "num_base_bdevs": 2, 00:16:14.175 "num_base_bdevs_discovered": 2, 00:16:14.175 "num_base_bdevs_operational": 2, 00:16:14.175 "process": { 00:16:14.175 "type": "rebuild", 00:16:14.175 "target": "spare", 00:16:14.175 "progress": { 00:16:14.175 
"blocks": 20480, 00:16:14.175 "percent": 31 00:16:14.175 } 00:16:14.175 }, 00:16:14.175 "base_bdevs_list": [ 00:16:14.175 { 00:16:14.175 "name": "spare", 00:16:14.175 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:14.175 "is_configured": true, 00:16:14.175 "data_offset": 0, 00:16:14.175 "data_size": 65536 00:16:14.175 }, 00:16:14.175 { 00:16:14.175 "name": "BaseBdev2", 00:16:14.175 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:14.175 "is_configured": true, 00:16:14.175 "data_offset": 0, 00:16:14.175 "data_size": 65536 00:16:14.175 } 00:16:14.175 ] 00:16:14.175 }' 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.175 [2024-11-20 07:13:11.274402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.175 [2024-11-20 07:13:11.314352] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:14.175 [2024-11-20 07:13:11.314468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.175 [2024-11-20 07:13:11.314492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:14.175 [2024-11-20 07:13:11.314506] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:14.175 07:13:11 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.175 "name": "raid_bdev1", 00:16:14.175 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:14.175 "strip_size_kb": 0, 00:16:14.175 "state": "online", 00:16:14.175 "raid_level": "raid1", 00:16:14.175 
"superblock": false, 00:16:14.175 "num_base_bdevs": 2, 00:16:14.175 "num_base_bdevs_discovered": 1, 00:16:14.175 "num_base_bdevs_operational": 1, 00:16:14.175 "base_bdevs_list": [ 00:16:14.175 { 00:16:14.175 "name": null, 00:16:14.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.175 "is_configured": false, 00:16:14.175 "data_offset": 0, 00:16:14.175 "data_size": 65536 00:16:14.175 }, 00:16:14.175 { 00:16:14.175 "name": "BaseBdev2", 00:16:14.175 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:14.175 "is_configured": true, 00:16:14.175 "data_offset": 0, 00:16:14.175 "data_size": 65536 00:16:14.175 } 00:16:14.175 ] 00:16:14.175 }' 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.175 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:14.741 "name": "raid_bdev1", 00:16:14.741 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:14.741 "strip_size_kb": 0, 00:16:14.741 "state": "online", 00:16:14.741 "raid_level": "raid1", 00:16:14.741 "superblock": false, 00:16:14.741 "num_base_bdevs": 2, 00:16:14.741 "num_base_bdevs_discovered": 1, 00:16:14.741 "num_base_bdevs_operational": 1, 00:16:14.741 "base_bdevs_list": [ 00:16:14.741 { 00:16:14.741 "name": null, 00:16:14.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.741 "is_configured": false, 00:16:14.741 "data_offset": 0, 00:16:14.741 "data_size": 65536 00:16:14.741 }, 00:16:14.741 { 00:16:14.741 "name": "BaseBdev2", 00:16:14.741 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:14.741 "is_configured": true, 00:16:14.741 "data_offset": 0, 00:16:14.741 "data_size": 65536 00:16:14.741 } 00:16:14.741 ] 00:16:14.741 }' 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.741 07:13:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.741 07:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.741 07:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.741 07:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.741 07:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.741 [2024-11-20 07:13:12.010581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.741 [2024-11-20 07:13:12.026174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:16:14.741 07:13:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.741 
07:13:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.741 [2024-11-20 07:13:12.028698] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.118 "name": "raid_bdev1", 00:16:16.118 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:16.118 "strip_size_kb": 0, 00:16:16.118 "state": "online", 00:16:16.118 "raid_level": "raid1", 00:16:16.118 "superblock": false, 00:16:16.118 "num_base_bdevs": 2, 00:16:16.118 "num_base_bdevs_discovered": 2, 00:16:16.118 "num_base_bdevs_operational": 2, 00:16:16.118 "process": { 00:16:16.118 "type": "rebuild", 00:16:16.118 "target": "spare", 00:16:16.118 "progress": { 00:16:16.118 "blocks": 20480, 00:16:16.118 "percent": 31 00:16:16.118 } 00:16:16.118 }, 00:16:16.118 "base_bdevs_list": [ 
00:16:16.118 { 00:16:16.118 "name": "spare", 00:16:16.118 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:16.118 "is_configured": true, 00:16:16.118 "data_offset": 0, 00:16:16.118 "data_size": 65536 00:16:16.118 }, 00:16:16.118 { 00:16:16.118 "name": "BaseBdev2", 00:16:16.118 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:16.118 "is_configured": true, 00:16:16.118 "data_offset": 0, 00:16:16.118 "data_size": 65536 00:16:16.118 } 00:16:16.118 ] 00:16:16.118 }' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=398 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.118 
07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.118 "name": "raid_bdev1", 00:16:16.118 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:16.118 "strip_size_kb": 0, 00:16:16.118 "state": "online", 00:16:16.118 "raid_level": "raid1", 00:16:16.118 "superblock": false, 00:16:16.118 "num_base_bdevs": 2, 00:16:16.118 "num_base_bdevs_discovered": 2, 00:16:16.118 "num_base_bdevs_operational": 2, 00:16:16.118 "process": { 00:16:16.118 "type": "rebuild", 00:16:16.118 "target": "spare", 00:16:16.118 "progress": { 00:16:16.118 "blocks": 22528, 00:16:16.118 "percent": 34 00:16:16.118 } 00:16:16.118 }, 00:16:16.118 "base_bdevs_list": [ 00:16:16.118 { 00:16:16.118 "name": "spare", 00:16:16.118 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:16.118 "is_configured": true, 00:16:16.118 "data_offset": 0, 00:16:16.118 "data_size": 65536 00:16:16.118 }, 00:16:16.118 { 00:16:16.118 "name": "BaseBdev2", 00:16:16.118 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:16.118 "is_configured": true, 00:16:16.118 "data_offset": 0, 00:16:16.118 "data_size": 65536 00:16:16.118 } 00:16:16.118 ] 00:16:16.118 }' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.118 07:13:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.102 "name": "raid_bdev1", 00:16:17.102 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:17.102 "strip_size_kb": 0, 00:16:17.102 "state": "online", 00:16:17.102 "raid_level": "raid1", 00:16:17.102 "superblock": false, 00:16:17.102 "num_base_bdevs": 2, 00:16:17.102 "num_base_bdevs_discovered": 2, 00:16:17.102 "num_base_bdevs_operational": 2, 00:16:17.102 "process": { 
00:16:17.102 "type": "rebuild", 00:16:17.102 "target": "spare", 00:16:17.102 "progress": { 00:16:17.102 "blocks": 47104, 00:16:17.102 "percent": 71 00:16:17.102 } 00:16:17.102 }, 00:16:17.102 "base_bdevs_list": [ 00:16:17.102 { 00:16:17.102 "name": "spare", 00:16:17.102 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:17.102 "is_configured": true, 00:16:17.102 "data_offset": 0, 00:16:17.102 "data_size": 65536 00:16:17.102 }, 00:16:17.102 { 00:16:17.102 "name": "BaseBdev2", 00:16:17.102 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:17.102 "is_configured": true, 00:16:17.102 "data_offset": 0, 00:16:17.102 "data_size": 65536 00:16:17.102 } 00:16:17.102 ] 00:16:17.102 }' 00:16:17.102 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.362 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.362 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.362 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.362 07:13:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.297 [2024-11-20 07:13:15.251965] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:18.297 [2024-11-20 07:13:15.252101] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:18.297 [2024-11-20 07:13:15.252176] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.297 "name": "raid_bdev1", 00:16:18.297 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:18.297 "strip_size_kb": 0, 00:16:18.297 "state": "online", 00:16:18.297 "raid_level": "raid1", 00:16:18.297 "superblock": false, 00:16:18.297 "num_base_bdevs": 2, 00:16:18.297 "num_base_bdevs_discovered": 2, 00:16:18.297 "num_base_bdevs_operational": 2, 00:16:18.297 "base_bdevs_list": [ 00:16:18.297 { 00:16:18.297 "name": "spare", 00:16:18.297 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:18.297 "is_configured": true, 00:16:18.297 "data_offset": 0, 00:16:18.297 "data_size": 65536 00:16:18.297 }, 00:16:18.297 { 00:16:18.297 "name": "BaseBdev2", 00:16:18.297 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:18.297 "is_configured": true, 00:16:18.297 "data_offset": 0, 00:16:18.297 "data_size": 65536 00:16:18.297 } 00:16:18.297 ] 00:16:18.297 }' 00:16:18.297 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:18.555 07:13:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.555 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.556 "name": "raid_bdev1", 00:16:18.556 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:18.556 "strip_size_kb": 0, 00:16:18.556 "state": "online", 00:16:18.556 "raid_level": "raid1", 00:16:18.556 "superblock": false, 00:16:18.556 "num_base_bdevs": 2, 00:16:18.556 "num_base_bdevs_discovered": 2, 00:16:18.556 "num_base_bdevs_operational": 2, 00:16:18.556 "base_bdevs_list": [ 00:16:18.556 { 00:16:18.556 "name": "spare", 00:16:18.556 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:18.556 "is_configured": true, 
00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 }, 00:16:18.556 { 00:16:18.556 "name": "BaseBdev2", 00:16:18.556 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:18.556 "is_configured": true, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 } 00:16:18.556 ] 00:16:18.556 }' 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.556 07:13:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.556 "name": "raid_bdev1", 00:16:18.556 "uuid": "82b88eb5-4f19-4665-9f23-de1377030947", 00:16:18.556 "strip_size_kb": 0, 00:16:18.556 "state": "online", 00:16:18.556 "raid_level": "raid1", 00:16:18.556 "superblock": false, 00:16:18.556 "num_base_bdevs": 2, 00:16:18.556 "num_base_bdevs_discovered": 2, 00:16:18.556 "num_base_bdevs_operational": 2, 00:16:18.556 "base_bdevs_list": [ 00:16:18.556 { 00:16:18.556 "name": "spare", 00:16:18.556 "uuid": "ae8e9917-2cb0-5540-98b0-8d6185a949e1", 00:16:18.556 "is_configured": true, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 }, 00:16:18.556 { 00:16:18.556 "name": "BaseBdev2", 00:16:18.556 "uuid": "3c53beda-fe9a-5332-9153-925fe4b6317a", 00:16:18.556 "is_configured": true, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 } 00:16:18.556 ] 00:16:18.556 }' 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.556 07:13:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 [2024-11-20 07:13:16.300097] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.123 [2024-11-20 
07:13:16.300286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.123 [2024-11-20 07:13:16.300407] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.123 [2024-11-20 07:13:16.300499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.123 [2024-11-20 07:13:16.300516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.123 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:19.382 /dev/nbd0 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.640 1+0 records in 00:16:19.640 1+0 records out 00:16:19.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656849 s, 6.2 MB/s 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.640 07:13:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:19.899 /dev/nbd1 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.899 1+0 records in 00:16:19.899 1+0 records out 00:16:19.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348636 s, 11.7 MB/s 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.899 07:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.159 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:20.417 07:13:17 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.418 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 
75387 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75387 ']' 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75387 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75387 00:16:20.677 killing process with pid 75387 00:16:20.677 Received shutdown signal, test time was about 60.000000 seconds 00:16:20.677 00:16:20.677 Latency(us) 00:16:20.677 [2024-11-20T07:13:17.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.677 [2024-11-20T07:13:17.997Z] =================================================================================================================== 00:16:20.677 [2024-11-20T07:13:17.997Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75387' 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75387 00:16:20.677 [2024-11-20 07:13:17.827282] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.677 07:13:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75387 00:16:20.936 [2024-11-20 07:13:18.096326] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:21.873 00:16:21.873 real 0m18.539s 00:16:21.873 user 0m21.279s 00:16:21.873 sys 
0m3.575s 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.873 ************************************ 00:16:21.873 END TEST raid_rebuild_test 00:16:21.873 ************************************ 00:16:21.873 07:13:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:16:21.873 07:13:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:21.873 07:13:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.873 07:13:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.873 ************************************ 00:16:21.873 START TEST raid_rebuild_test_sb 00:16:21.873 ************************************ 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75844 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75844 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' 
-z 75844 ']' 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.873 07:13:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.132 [2024-11-20 07:13:19.274968] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:16:22.132 [2024-11-20 07:13:19.275341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75844 ] 00:16:22.132 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:22.132 Zero copy mechanism will not be used. 
00:16:22.392 [2024-11-20 07:13:19.453998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.392 [2024-11-20 07:13:19.584538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.651 [2024-11-20 07:13:19.789064] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.651 [2024-11-20 07:13:19.789144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 BaseBdev1_malloc 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 [2024-11-20 07:13:20.352086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:23.220 [2024-11-20 07:13:20.352308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.220 [2024-11-20 07:13:20.352352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:23.220 [2024-11-20 
07:13:20.352373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.220 [2024-11-20 07:13:20.355287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.220 [2024-11-20 07:13:20.355499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:23.220 BaseBdev1 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 BaseBdev2_malloc 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 [2024-11-20 07:13:20.404047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:23.220 [2024-11-20 07:13:20.404266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.220 [2024-11-20 07:13:20.404304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:23.220 [2024-11-20 07:13:20.404327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.220 [2024-11-20 07:13:20.407128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:23.220 [2024-11-20 07:13:20.407177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:23.220 BaseBdev2 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 spare_malloc 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 spare_delay 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 [2024-11-20 07:13:20.477639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:23.220 [2024-11-20 07:13:20.477718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.220 [2024-11-20 07:13:20.477748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:23.220 [2024-11-20 07:13:20.477767] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.220 [2024-11-20 07:13:20.480558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.220 [2024-11-20 07:13:20.480609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:23.220 spare 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.220 [2024-11-20 07:13:20.485720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:23.220 [2024-11-20 07:13:20.488321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:23.220 [2024-11-20 07:13:20.488713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:23.220 [2024-11-20 07:13:20.488856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:23.220 [2024-11-20 07:13:20.489235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:23.220 [2024-11-20 07:13:20.489609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:23.220 [2024-11-20 07:13:20.489734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:23.220 [2024-11-20 07:13:20.490177] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.220 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.221 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.221 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.221 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.221 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.221 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.480 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.480 "name": "raid_bdev1", 00:16:23.480 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:23.480 "strip_size_kb": 0, 00:16:23.480 "state": "online", 00:16:23.480 "raid_level": "raid1", 00:16:23.480 "superblock": true, 00:16:23.480 "num_base_bdevs": 2, 00:16:23.480 
"num_base_bdevs_discovered": 2, 00:16:23.480 "num_base_bdevs_operational": 2, 00:16:23.480 "base_bdevs_list": [ 00:16:23.480 { 00:16:23.480 "name": "BaseBdev1", 00:16:23.480 "uuid": "65370a93-593a-5f3d-afad-6e73767bbf52", 00:16:23.480 "is_configured": true, 00:16:23.480 "data_offset": 2048, 00:16:23.480 "data_size": 63488 00:16:23.480 }, 00:16:23.480 { 00:16:23.480 "name": "BaseBdev2", 00:16:23.480 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:23.480 "is_configured": true, 00:16:23.480 "data_offset": 2048, 00:16:23.480 "data_size": 63488 00:16:23.480 } 00:16:23.480 ] 00:16:23.480 }' 00:16:23.480 07:13:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.480 07:13:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.739 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:23.739 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.739 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.739 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:23.739 [2024-11-20 07:13:21.030656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:23.739 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:23.998 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:24.259 [2024-11-20 07:13:21.418498] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:24.259 /dev/nbd0 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.259 1+0 records in 00:16:24.259 1+0 records out 00:16:24.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547712 s, 7.5 MB/s 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:24.259 07:13:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:24.259 07:13:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:16:30.868 63488+0 records in 00:16:30.868 63488+0 records out 00:16:30.868 32505856 bytes (33 MB, 31 MiB) copied, 6.24173 s, 5.2 MB/s 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.868 07:13:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:30.868 [2024-11-20 07:13:28.030369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.868 [2024-11-20 07:13:28.062468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.868 07:13:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.868 "name": "raid_bdev1", 00:16:30.868 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:30.868 "strip_size_kb": 0, 00:16:30.868 "state": "online", 00:16:30.868 "raid_level": "raid1", 00:16:30.868 "superblock": true, 00:16:30.868 "num_base_bdevs": 2, 00:16:30.868 "num_base_bdevs_discovered": 1, 00:16:30.868 "num_base_bdevs_operational": 1, 00:16:30.868 "base_bdevs_list": [ 00:16:30.868 { 00:16:30.868 "name": null, 00:16:30.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.868 "is_configured": false, 00:16:30.868 "data_offset": 0, 00:16:30.868 "data_size": 63488 00:16:30.868 }, 00:16:30.868 { 00:16:30.868 "name": "BaseBdev2", 00:16:30.868 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:30.868 "is_configured": true, 00:16:30.868 "data_offset": 2048, 00:16:30.868 "data_size": 63488 00:16:30.868 } 00:16:30.868 ] 00:16:30.868 }' 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.868 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.436 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:31.436 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.436 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.436 [2024-11-20 07:13:28.586628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:16:31.436 [2024-11-20 07:13:28.602986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:16:31.436 07:13:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.436 07:13:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:31.436 [2024-11-20 07:13:28.605451] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.371 "name": "raid_bdev1", 00:16:32.371 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:32.371 "strip_size_kb": 0, 00:16:32.371 "state": "online", 00:16:32.371 "raid_level": "raid1", 00:16:32.371 "superblock": true, 00:16:32.371 "num_base_bdevs": 2, 00:16:32.371 
"num_base_bdevs_discovered": 2, 00:16:32.371 "num_base_bdevs_operational": 2, 00:16:32.371 "process": { 00:16:32.371 "type": "rebuild", 00:16:32.371 "target": "spare", 00:16:32.371 "progress": { 00:16:32.371 "blocks": 20480, 00:16:32.371 "percent": 32 00:16:32.371 } 00:16:32.371 }, 00:16:32.371 "base_bdevs_list": [ 00:16:32.371 { 00:16:32.371 "name": "spare", 00:16:32.371 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:32.371 "is_configured": true, 00:16:32.371 "data_offset": 2048, 00:16:32.371 "data_size": 63488 00:16:32.371 }, 00:16:32.371 { 00:16:32.371 "name": "BaseBdev2", 00:16:32.371 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:32.371 "is_configured": true, 00:16:32.371 "data_offset": 2048, 00:16:32.371 "data_size": 63488 00:16:32.371 } 00:16:32.371 ] 00:16:32.371 }' 00:16:32.371 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.629 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.629 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.629 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.629 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:32.629 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.629 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.629 [2024-11-20 07:13:29.770840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.629 [2024-11-20 07:13:29.814147] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:32.629 [2024-11-20 07:13:29.814247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.630 [2024-11-20 07:13:29.814270] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:32.630 [2024-11-20 07:13:29.814290] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.630 07:13:29 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.630 "name": "raid_bdev1", 00:16:32.630 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:32.630 "strip_size_kb": 0, 00:16:32.630 "state": "online", 00:16:32.630 "raid_level": "raid1", 00:16:32.630 "superblock": true, 00:16:32.630 "num_base_bdevs": 2, 00:16:32.630 "num_base_bdevs_discovered": 1, 00:16:32.630 "num_base_bdevs_operational": 1, 00:16:32.630 "base_bdevs_list": [ 00:16:32.630 { 00:16:32.630 "name": null, 00:16:32.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.630 "is_configured": false, 00:16:32.630 "data_offset": 0, 00:16:32.630 "data_size": 63488 00:16:32.630 }, 00:16:32.630 { 00:16:32.630 "name": "BaseBdev2", 00:16:32.630 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:32.630 "is_configured": true, 00:16:32.630 "data_offset": 2048, 00:16:32.630 "data_size": 63488 00:16:32.630 } 00:16:32.630 ] 00:16:32.630 }' 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.630 07:13:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.244 "name": "raid_bdev1", 00:16:33.244 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:33.244 "strip_size_kb": 0, 00:16:33.244 "state": "online", 00:16:33.244 "raid_level": "raid1", 00:16:33.244 "superblock": true, 00:16:33.244 "num_base_bdevs": 2, 00:16:33.244 "num_base_bdevs_discovered": 1, 00:16:33.244 "num_base_bdevs_operational": 1, 00:16:33.244 "base_bdevs_list": [ 00:16:33.244 { 00:16:33.244 "name": null, 00:16:33.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.244 "is_configured": false, 00:16:33.244 "data_offset": 0, 00:16:33.244 "data_size": 63488 00:16:33.244 }, 00:16:33.244 { 00:16:33.244 "name": "BaseBdev2", 00:16:33.244 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:33.244 "is_configured": true, 00:16:33.244 "data_offset": 2048, 00:16:33.244 "data_size": 63488 00:16:33.244 } 00:16:33.244 ] 00:16:33.244 }' 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:33.244 [2024-11-20 07:13:30.494293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.244 [2024-11-20 07:13:30.509740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:16:33.244 [2024-11-20 07:13:30.512229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.244 07:13:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.630 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.630 "name": "raid_bdev1", 00:16:34.630 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:34.630 "strip_size_kb": 0, 00:16:34.630 "state": "online", 00:16:34.630 "raid_level": "raid1", 
00:16:34.630 "superblock": true, 00:16:34.630 "num_base_bdevs": 2, 00:16:34.630 "num_base_bdevs_discovered": 2, 00:16:34.631 "num_base_bdevs_operational": 2, 00:16:34.631 "process": { 00:16:34.631 "type": "rebuild", 00:16:34.631 "target": "spare", 00:16:34.631 "progress": { 00:16:34.631 "blocks": 20480, 00:16:34.631 "percent": 32 00:16:34.631 } 00:16:34.631 }, 00:16:34.631 "base_bdevs_list": [ 00:16:34.631 { 00:16:34.631 "name": "spare", 00:16:34.631 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:34.631 "is_configured": true, 00:16:34.631 "data_offset": 2048, 00:16:34.631 "data_size": 63488 00:16:34.631 }, 00:16:34.631 { 00:16:34.631 "name": "BaseBdev2", 00:16:34.631 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:34.631 "is_configured": true, 00:16:34.631 "data_offset": 2048, 00:16:34.631 "data_size": 63488 00:16:34.631 } 00:16:34.631 ] 00:16:34.631 }' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:34.631 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:34.631 07:13:31 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=416 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.631 "name": "raid_bdev1", 00:16:34.631 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:34.631 "strip_size_kb": 0, 00:16:34.631 "state": "online", 00:16:34.631 "raid_level": "raid1", 00:16:34.631 "superblock": true, 00:16:34.631 "num_base_bdevs": 2, 00:16:34.631 "num_base_bdevs_discovered": 2, 00:16:34.631 "num_base_bdevs_operational": 2, 00:16:34.631 "process": { 00:16:34.631 "type": "rebuild", 00:16:34.631 "target": "spare", 00:16:34.631 "progress": { 00:16:34.631 "blocks": 22528, 00:16:34.631 "percent": 35 00:16:34.631 } 00:16:34.631 }, 00:16:34.631 "base_bdevs_list": [ 
00:16:34.631 { 00:16:34.631 "name": "spare", 00:16:34.631 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:34.631 "is_configured": true, 00:16:34.631 "data_offset": 2048, 00:16:34.631 "data_size": 63488 00:16:34.631 }, 00:16:34.631 { 00:16:34.631 "name": "BaseBdev2", 00:16:34.631 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:34.631 "is_configured": true, 00:16:34.631 "data_offset": 2048, 00:16:34.631 "data_size": 63488 00:16:34.631 } 00:16:34.631 ] 00:16:34.631 }' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.631 07:13:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.561 07:13:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.818 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.818 "name": "raid_bdev1", 00:16:35.818 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:35.818 "strip_size_kb": 0, 00:16:35.818 "state": "online", 00:16:35.818 "raid_level": "raid1", 00:16:35.818 "superblock": true, 00:16:35.818 "num_base_bdevs": 2, 00:16:35.818 "num_base_bdevs_discovered": 2, 00:16:35.818 "num_base_bdevs_operational": 2, 00:16:35.818 "process": { 00:16:35.818 "type": "rebuild", 00:16:35.818 "target": "spare", 00:16:35.818 "progress": { 00:16:35.818 "blocks": 47104, 00:16:35.818 "percent": 74 00:16:35.818 } 00:16:35.818 }, 00:16:35.818 "base_bdevs_list": [ 00:16:35.818 { 00:16:35.818 "name": "spare", 00:16:35.818 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:35.818 "is_configured": true, 00:16:35.818 "data_offset": 2048, 00:16:35.818 "data_size": 63488 00:16:35.818 }, 00:16:35.818 { 00:16:35.818 "name": "BaseBdev2", 00:16:35.818 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:35.818 "is_configured": true, 00:16:35.818 "data_offset": 2048, 00:16:35.818 "data_size": 63488 00:16:35.818 } 00:16:35.818 ] 00:16:35.818 }' 00:16:35.818 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.818 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.818 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.818 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.818 07:13:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:36.385 [2024-11-20 
07:13:33.634443] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:36.385 [2024-11-20 07:13:33.634569] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:36.385 [2024-11-20 07:13:33.634726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.994 07:13:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.994 "name": "raid_bdev1", 00:16:36.994 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:36.994 "strip_size_kb": 0, 00:16:36.994 "state": "online", 00:16:36.994 "raid_level": "raid1", 00:16:36.994 "superblock": true, 00:16:36.994 "num_base_bdevs": 2, 00:16:36.994 "num_base_bdevs_discovered": 2, 00:16:36.994 
"num_base_bdevs_operational": 2, 00:16:36.994 "base_bdevs_list": [ 00:16:36.994 { 00:16:36.994 "name": "spare", 00:16:36.994 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:36.994 "is_configured": true, 00:16:36.994 "data_offset": 2048, 00:16:36.994 "data_size": 63488 00:16:36.994 }, 00:16:36.994 { 00:16:36.994 "name": "BaseBdev2", 00:16:36.994 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:36.994 "is_configured": true, 00:16:36.994 "data_offset": 2048, 00:16:36.994 "data_size": 63488 00:16:36.994 } 00:16:36.994 ] 00:16:36.994 }' 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.994 "name": "raid_bdev1", 00:16:36.994 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:36.994 "strip_size_kb": 0, 00:16:36.994 "state": "online", 00:16:36.994 "raid_level": "raid1", 00:16:36.994 "superblock": true, 00:16:36.994 "num_base_bdevs": 2, 00:16:36.994 "num_base_bdevs_discovered": 2, 00:16:36.994 "num_base_bdevs_operational": 2, 00:16:36.994 "base_bdevs_list": [ 00:16:36.994 { 00:16:36.994 "name": "spare", 00:16:36.994 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:36.994 "is_configured": true, 00:16:36.994 "data_offset": 2048, 00:16:36.994 "data_size": 63488 00:16:36.994 }, 00:16:36.994 { 00:16:36.994 "name": "BaseBdev2", 00:16:36.994 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:36.994 "is_configured": true, 00:16:36.994 "data_offset": 2048, 00:16:36.994 "data_size": 63488 00:16:36.994 } 00:16:36.994 ] 00:16:36.994 }' 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.994 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.291 07:13:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.291 "name": "raid_bdev1", 00:16:37.291 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:37.291 "strip_size_kb": 0, 00:16:37.291 "state": "online", 00:16:37.291 "raid_level": "raid1", 00:16:37.291 "superblock": true, 00:16:37.291 "num_base_bdevs": 2, 00:16:37.291 "num_base_bdevs_discovered": 2, 00:16:37.291 "num_base_bdevs_operational": 2, 00:16:37.291 "base_bdevs_list": [ 00:16:37.291 { 00:16:37.291 "name": "spare", 00:16:37.291 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:37.291 "is_configured": true, 00:16:37.291 "data_offset": 2048, 00:16:37.291 "data_size": 63488 00:16:37.291 }, 00:16:37.291 { 
00:16:37.291 "name": "BaseBdev2", 00:16:37.291 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:37.291 "is_configured": true, 00:16:37.291 "data_offset": 2048, 00:16:37.291 "data_size": 63488 00:16:37.291 } 00:16:37.291 ] 00:16:37.291 }' 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.291 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.550 [2024-11-20 07:13:34.803758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.550 [2024-11-20 07:13:34.803805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.550 [2024-11-20 07:13:34.803919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.550 [2024-11-20 07:13:34.804024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.550 [2024-11-20 07:13:34.804042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.550 07:13:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:37.550 07:13:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:38.118 /dev/nbd0 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:38.118 1+0 records in 00:16:38.118 1+0 records out 00:16:38.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265064 s, 15.5 MB/s 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.118 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:38.377 /dev/nbd1 00:16:38.377 07:13:35 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:38.377 1+0 records in 00:16:38.377 1+0 records out 00:16:38.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003571 s, 11.5 MB/s 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:38.377 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.636 07:13:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:38.895 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.154 [2024-11-20 07:13:36.390284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:39.154 [2024-11-20 
07:13:36.390357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.154 [2024-11-20 07:13:36.390403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:39.154 [2024-11-20 07:13:36.390419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.154 [2024-11-20 07:13:36.393324] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.154 [2024-11-20 07:13:36.393371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:39.154 [2024-11-20 07:13:36.393489] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:39.154 [2024-11-20 07:13:36.393554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.154 [2024-11-20 07:13:36.393736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:39.154 spare 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.154 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.413 [2024-11-20 07:13:36.493880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:39.413 [2024-11-20 07:13:36.493949] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:39.413 [2024-11-20 07:13:36.494364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:16:39.413 [2024-11-20 07:13:36.494631] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:39.413 [2024-11-20 07:13:36.494661] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:39.413 [2024-11-20 07:13:36.494928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.413 "name": "raid_bdev1", 00:16:39.413 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:39.413 "strip_size_kb": 0, 00:16:39.413 "state": "online", 00:16:39.413 "raid_level": "raid1", 00:16:39.413 "superblock": true, 00:16:39.413 "num_base_bdevs": 2, 00:16:39.413 "num_base_bdevs_discovered": 2, 00:16:39.413 "num_base_bdevs_operational": 2, 00:16:39.413 "base_bdevs_list": [ 00:16:39.413 { 00:16:39.413 "name": "spare", 00:16:39.413 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:39.413 "is_configured": true, 00:16:39.413 "data_offset": 2048, 00:16:39.413 "data_size": 63488 00:16:39.413 }, 00:16:39.413 { 00:16:39.413 "name": "BaseBdev2", 00:16:39.413 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:39.413 "is_configured": true, 00:16:39.413 "data_offset": 2048, 00:16:39.413 "data_size": 63488 00:16:39.413 } 00:16:39.413 ] 00:16:39.413 }' 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.413 07:13:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.982 "name": "raid_bdev1", 00:16:39.982 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:39.982 "strip_size_kb": 0, 00:16:39.982 "state": "online", 00:16:39.982 "raid_level": "raid1", 00:16:39.982 "superblock": true, 00:16:39.982 "num_base_bdevs": 2, 00:16:39.982 "num_base_bdevs_discovered": 2, 00:16:39.982 "num_base_bdevs_operational": 2, 00:16:39.982 "base_bdevs_list": [ 00:16:39.982 { 00:16:39.982 "name": "spare", 00:16:39.982 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:39.982 "is_configured": true, 00:16:39.982 "data_offset": 2048, 00:16:39.982 "data_size": 63488 00:16:39.982 }, 00:16:39.982 { 00:16:39.982 "name": "BaseBdev2", 00:16:39.982 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:39.982 "is_configured": true, 00:16:39.982 "data_offset": 2048, 00:16:39.982 "data_size": 63488 00:16:39.982 } 00:16:39.982 ] 00:16:39.982 }' 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.982 07:13:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.982 [2024-11-20 07:13:37.211053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.982 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.982 "name": "raid_bdev1", 00:16:39.982 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:39.982 "strip_size_kb": 0, 00:16:39.982 "state": "online", 00:16:39.982 "raid_level": "raid1", 00:16:39.982 "superblock": true, 00:16:39.982 "num_base_bdevs": 2, 00:16:39.982 "num_base_bdevs_discovered": 1, 00:16:39.982 "num_base_bdevs_operational": 1, 00:16:39.982 "base_bdevs_list": [ 00:16:39.983 { 00:16:39.983 "name": null, 00:16:39.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.983 "is_configured": false, 00:16:39.983 "data_offset": 0, 00:16:39.983 "data_size": 63488 00:16:39.983 }, 00:16:39.983 { 00:16:39.983 "name": "BaseBdev2", 00:16:39.983 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:39.983 "is_configured": true, 00:16:39.983 "data_offset": 2048, 00:16:39.983 "data_size": 63488 00:16:39.983 } 00:16:39.983 ] 00:16:39.983 }' 00:16:39.983 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.983 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.549 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:40.549 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.549 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:40.549 [2024-11-20 07:13:37.767243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.549 [2024-11-20 07:13:37.767479] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:40.549 [2024-11-20 07:13:37.767509] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:40.549 [2024-11-20 07:13:37.767561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:40.549 [2024-11-20 07:13:37.783017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:16:40.549 07:13:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.549 07:13:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:40.549 [2024-11-20 07:13:37.785466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.487 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.746 "name": "raid_bdev1", 00:16:41.746 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:41.746 "strip_size_kb": 0, 00:16:41.746 "state": "online", 00:16:41.746 "raid_level": "raid1", 00:16:41.746 "superblock": true, 00:16:41.746 "num_base_bdevs": 2, 00:16:41.746 "num_base_bdevs_discovered": 2, 00:16:41.746 "num_base_bdevs_operational": 2, 00:16:41.746 "process": { 00:16:41.746 "type": "rebuild", 00:16:41.746 "target": "spare", 00:16:41.746 "progress": { 00:16:41.746 "blocks": 20480, 00:16:41.746 "percent": 32 00:16:41.746 } 00:16:41.746 }, 00:16:41.746 "base_bdevs_list": [ 00:16:41.746 { 00:16:41.746 "name": "spare", 00:16:41.746 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:41.746 "is_configured": true, 00:16:41.746 "data_offset": 2048, 00:16:41.746 "data_size": 63488 00:16:41.746 }, 00:16:41.746 { 00:16:41.746 "name": "BaseBdev2", 00:16:41.746 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:41.746 "is_configured": true, 00:16:41.746 "data_offset": 2048, 00:16:41.746 "data_size": 63488 00:16:41.746 } 00:16:41.746 ] 00:16:41.746 }' 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:41.746 07:13:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.746 [2024-11-20 07:13:38.963089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.746 [2024-11-20 07:13:38.994207] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:41.746 [2024-11-20 07:13:38.994299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.746 [2024-11-20 07:13:38.994322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:41.746 [2024-11-20 07:13:38.994337] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.746 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.004 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.004 "name": "raid_bdev1", 00:16:42.004 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:42.004 "strip_size_kb": 0, 00:16:42.004 "state": "online", 00:16:42.004 "raid_level": "raid1", 00:16:42.004 "superblock": true, 00:16:42.004 "num_base_bdevs": 2, 00:16:42.004 "num_base_bdevs_discovered": 1, 00:16:42.004 "num_base_bdevs_operational": 1, 00:16:42.004 "base_bdevs_list": [ 00:16:42.004 { 00:16:42.004 "name": null, 00:16:42.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.004 "is_configured": false, 00:16:42.004 "data_offset": 0, 00:16:42.004 "data_size": 63488 00:16:42.004 }, 00:16:42.004 { 00:16:42.004 "name": "BaseBdev2", 00:16:42.004 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:42.004 "is_configured": true, 00:16:42.004 "data_offset": 2048, 00:16:42.004 "data_size": 63488 00:16:42.004 } 00:16:42.004 ] 00:16:42.004 }' 00:16:42.004 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.004 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.262 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:42.262 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.262 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:42.262 [2024-11-20 07:13:39.526109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:42.262 [2024-11-20 07:13:39.526189] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.262 [2024-11-20 07:13:39.526219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:42.262 [2024-11-20 07:13:39.526237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.262 [2024-11-20 07:13:39.526826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.262 [2024-11-20 07:13:39.526891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:42.262 [2024-11-20 07:13:39.527024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:42.262 [2024-11-20 07:13:39.527049] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:42.262 [2024-11-20 07:13:39.527064] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:42.262 [2024-11-20 07:13:39.527102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:42.262 [2024-11-20 07:13:39.542360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:42.262 spare 00:16:42.262 07:13:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.262 07:13:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:42.262 [2024-11-20 07:13:39.544796] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.639 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.639 "name": "raid_bdev1", 00:16:43.639 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:43.639 "strip_size_kb": 0, 00:16:43.639 "state": "online", 00:16:43.639 
"raid_level": "raid1", 00:16:43.639 "superblock": true, 00:16:43.639 "num_base_bdevs": 2, 00:16:43.639 "num_base_bdevs_discovered": 2, 00:16:43.639 "num_base_bdevs_operational": 2, 00:16:43.639 "process": { 00:16:43.639 "type": "rebuild", 00:16:43.639 "target": "spare", 00:16:43.639 "progress": { 00:16:43.639 "blocks": 20480, 00:16:43.639 "percent": 32 00:16:43.639 } 00:16:43.639 }, 00:16:43.639 "base_bdevs_list": [ 00:16:43.639 { 00:16:43.639 "name": "spare", 00:16:43.639 "uuid": "4c02435a-de97-5134-b353-e824ed7589fc", 00:16:43.639 "is_configured": true, 00:16:43.639 "data_offset": 2048, 00:16:43.639 "data_size": 63488 00:16:43.639 }, 00:16:43.640 { 00:16:43.640 "name": "BaseBdev2", 00:16:43.640 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:43.640 "is_configured": true, 00:16:43.640 "data_offset": 2048, 00:16:43.640 "data_size": 63488 00:16:43.640 } 00:16:43.640 ] 00:16:43.640 }' 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 [2024-11-20 07:13:40.718365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.640 [2024-11-20 07:13:40.753561] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:43.640 [2024-11-20 07:13:40.753640] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.640 [2024-11-20 07:13:40.753667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:43.640 [2024-11-20 07:13:40.753679] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.640 07:13:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.640 "name": "raid_bdev1", 00:16:43.640 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:43.640 "strip_size_kb": 0, 00:16:43.640 "state": "online", 00:16:43.640 "raid_level": "raid1", 00:16:43.640 "superblock": true, 00:16:43.640 "num_base_bdevs": 2, 00:16:43.640 "num_base_bdevs_discovered": 1, 00:16:43.640 "num_base_bdevs_operational": 1, 00:16:43.640 "base_bdevs_list": [ 00:16:43.640 { 00:16:43.640 "name": null, 00:16:43.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.640 "is_configured": false, 00:16:43.640 "data_offset": 0, 00:16:43.640 "data_size": 63488 00:16:43.640 }, 00:16:43.640 { 00:16:43.640 "name": "BaseBdev2", 00:16:43.640 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:43.640 "is_configured": true, 00:16:43.640 "data_offset": 2048, 00:16:43.640 "data_size": 63488 00:16:43.640 } 00:16:43.640 ] 00:16:43.640 }' 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.640 07:13:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.208 "name": "raid_bdev1", 00:16:44.208 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:44.208 "strip_size_kb": 0, 00:16:44.208 "state": "online", 00:16:44.208 "raid_level": "raid1", 00:16:44.208 "superblock": true, 00:16:44.208 "num_base_bdevs": 2, 00:16:44.208 "num_base_bdevs_discovered": 1, 00:16:44.208 "num_base_bdevs_operational": 1, 00:16:44.208 "base_bdevs_list": [ 00:16:44.208 { 00:16:44.208 "name": null, 00:16:44.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.208 "is_configured": false, 00:16:44.208 "data_offset": 0, 00:16:44.208 "data_size": 63488 00:16:44.208 }, 00:16:44.208 { 00:16:44.208 "name": "BaseBdev2", 00:16:44.208 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:44.208 "is_configured": true, 00:16:44.208 "data_offset": 2048, 00:16:44.208 "data_size": 63488 00:16:44.208 } 00:16:44.208 ] 00:16:44.208 }' 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.208 [2024-11-20 07:13:41.453494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:44.208 [2024-11-20 07:13:41.453554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.208 [2024-11-20 07:13:41.453586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:44.208 [2024-11-20 07:13:41.453612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.208 [2024-11-20 07:13:41.454175] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.208 [2024-11-20 07:13:41.454207] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:44.208 [2024-11-20 07:13:41.454313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:44.208 [2024-11-20 07:13:41.454335] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.208 [2024-11-20 07:13:41.454349] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:44.208 [2024-11-20 07:13:41.454363] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:44.208 BaseBdev1 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.208 07:13:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.144 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.403 "name": "raid_bdev1", 00:16:45.403 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:45.403 
"strip_size_kb": 0, 00:16:45.403 "state": "online", 00:16:45.403 "raid_level": "raid1", 00:16:45.403 "superblock": true, 00:16:45.403 "num_base_bdevs": 2, 00:16:45.403 "num_base_bdevs_discovered": 1, 00:16:45.403 "num_base_bdevs_operational": 1, 00:16:45.403 "base_bdevs_list": [ 00:16:45.403 { 00:16:45.403 "name": null, 00:16:45.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.403 "is_configured": false, 00:16:45.403 "data_offset": 0, 00:16:45.403 "data_size": 63488 00:16:45.403 }, 00:16:45.403 { 00:16:45.403 "name": "BaseBdev2", 00:16:45.403 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:45.403 "is_configured": true, 00:16:45.403 "data_offset": 2048, 00:16:45.403 "data_size": 63488 00:16:45.403 } 00:16:45.403 ] 00:16:45.403 }' 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.403 07:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.662 07:13:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.921 07:13:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.921 "name": "raid_bdev1", 00:16:45.921 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:45.921 "strip_size_kb": 0, 00:16:45.921 "state": "online", 00:16:45.921 "raid_level": "raid1", 00:16:45.921 "superblock": true, 00:16:45.921 "num_base_bdevs": 2, 00:16:45.921 "num_base_bdevs_discovered": 1, 00:16:45.921 "num_base_bdevs_operational": 1, 00:16:45.921 "base_bdevs_list": [ 00:16:45.921 { 00:16:45.921 "name": null, 00:16:45.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.921 "is_configured": false, 00:16:45.921 "data_offset": 0, 00:16:45.921 "data_size": 63488 00:16:45.921 }, 00:16:45.921 { 00:16:45.921 "name": "BaseBdev2", 00:16:45.921 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:45.921 "is_configured": true, 00:16:45.921 "data_offset": 2048, 00:16:45.921 "data_size": 63488 00:16:45.921 } 00:16:45.921 ] 00:16:45.921 }' 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.921 [2024-11-20 07:13:43.162040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.921 [2024-11-20 07:13:43.162246] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:45.921 [2024-11-20 07:13:43.162271] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:45.921 request: 00:16:45.921 { 00:16:45.921 "base_bdev": "BaseBdev1", 00:16:45.921 "raid_bdev": "raid_bdev1", 00:16:45.921 "method": "bdev_raid_add_base_bdev", 00:16:45.921 "req_id": 1 00:16:45.921 } 00:16:45.921 Got JSON-RPC error response 00:16:45.921 response: 00:16:45.921 { 00:16:45.921 "code": -22, 00:16:45.921 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:45.921 } 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:45.921 07:13:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:45.921 07:13:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:46.858 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.117 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.117 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.118 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.118 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.118 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.118 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.118 "name": "raid_bdev1", 00:16:47.118 "uuid": 
"827b3b85-713e-4ed8-8069-009c170b811a", 00:16:47.118 "strip_size_kb": 0, 00:16:47.118 "state": "online", 00:16:47.118 "raid_level": "raid1", 00:16:47.118 "superblock": true, 00:16:47.118 "num_base_bdevs": 2, 00:16:47.118 "num_base_bdevs_discovered": 1, 00:16:47.118 "num_base_bdevs_operational": 1, 00:16:47.118 "base_bdevs_list": [ 00:16:47.118 { 00:16:47.118 "name": null, 00:16:47.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.118 "is_configured": false, 00:16:47.118 "data_offset": 0, 00:16:47.118 "data_size": 63488 00:16:47.118 }, 00:16:47.118 { 00:16:47.118 "name": "BaseBdev2", 00:16:47.118 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:47.118 "is_configured": true, 00:16:47.118 "data_offset": 2048, 00:16:47.118 "data_size": 63488 00:16:47.118 } 00:16:47.118 ] 00:16:47.118 }' 00:16:47.118 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.118 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.717 "name": "raid_bdev1", 00:16:47.717 "uuid": "827b3b85-713e-4ed8-8069-009c170b811a", 00:16:47.717 "strip_size_kb": 0, 00:16:47.717 "state": "online", 00:16:47.717 "raid_level": "raid1", 00:16:47.717 "superblock": true, 00:16:47.717 "num_base_bdevs": 2, 00:16:47.717 "num_base_bdevs_discovered": 1, 00:16:47.717 "num_base_bdevs_operational": 1, 00:16:47.717 "base_bdevs_list": [ 00:16:47.717 { 00:16:47.717 "name": null, 00:16:47.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.717 "is_configured": false, 00:16:47.717 "data_offset": 0, 00:16:47.717 "data_size": 63488 00:16:47.717 }, 00:16:47.717 { 00:16:47.717 "name": "BaseBdev2", 00:16:47.717 "uuid": "c12405f5-3d3b-5f18-a6ba-d648bff1b096", 00:16:47.717 "is_configured": true, 00:16:47.717 "data_offset": 2048, 00:16:47.717 "data_size": 63488 00:16:47.717 } 00:16:47.717 ] 00:16:47.717 }' 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.717 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75844 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75844 ']' 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75844 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75844 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:47.718 killing process with pid 75844 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75844' 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75844 00:16:47.718 Received shutdown signal, test time was about 60.000000 seconds 00:16:47.718 00:16:47.718 Latency(us) 00:16:47.718 [2024-11-20T07:13:45.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.718 [2024-11-20T07:13:45.038Z] =================================================================================================================== 00:16:47.718 [2024-11-20T07:13:45.038Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:47.718 [2024-11-20 07:13:44.899489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.718 [2024-11-20 07:13:44.899644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.718 07:13:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75844 00:16:47.718 [2024-11-20 07:13:44.899712] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.718 [2024-11-20 07:13:44.899733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:47.976 [2024-11-20 07:13:45.165853] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.353 07:13:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:16:49.353 00:16:49.353 real 0m27.118s 00:16:49.353 user 0m33.249s 00:16:49.353 sys 0m4.033s 00:16:49.353 07:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.353 07:13:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:49.353 ************************************ 00:16:49.354 END TEST raid_rebuild_test_sb 00:16:49.354 ************************************ 00:16:49.354 07:13:46 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:16:49.354 07:13:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:49.354 07:13:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:49.354 07:13:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:49.354 ************************************ 00:16:49.354 START TEST raid_rebuild_test_io 00:16:49.354 ************************************ 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76606 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76606 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76606 ']' 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.354 07:13:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.354 [2024-11-20 07:13:46.452183] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:16:49.354 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:49.354 Zero copy mechanism will not be used. 00:16:49.354 [2024-11-20 07:13:46.452342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76606 ] 00:16:49.354 [2024-11-20 07:13:46.628522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.612 [2024-11-20 07:13:46.758646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.870 [2024-11-20 07:13:46.965326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.870 [2024-11-20 07:13:46.965408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.436 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 BaseBdev1_malloc 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 [2024-11-20 07:13:47.536398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:50.437 [2024-11-20 07:13:47.536483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.437 [2024-11-20 07:13:47.536537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:50.437 [2024-11-20 07:13:47.536576] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.437 [2024-11-20 07:13:47.539498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.437 [2024-11-20 07:13:47.539551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:50.437 BaseBdev1 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 BaseBdev2_malloc 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 [2024-11-20 07:13:47.588934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:50.437 [2024-11-20 07:13:47.589032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.437 [2024-11-20 07:13:47.589061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:50.437 [2024-11-20 07:13:47.589081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.437 [2024-11-20 07:13:47.592291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.437 [2024-11-20 07:13:47.592341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:50.437 BaseBdev2 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 spare_malloc 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 spare_delay 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 [2024-11-20 07:13:47.661564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.437 [2024-11-20 07:13:47.661636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.437 [2024-11-20 07:13:47.661667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:50.437 [2024-11-20 07:13:47.661685] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.437 [2024-11-20 07:13:47.664530] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.437 [2024-11-20 07:13:47.664578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.437 spare 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 
07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 [2024-11-20 07:13:47.669625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.437 [2024-11-20 07:13:47.672031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.437 [2024-11-20 07:13:47.672158] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:50.437 [2024-11-20 07:13:47.672181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:50.437 [2024-11-20 07:13:47.672508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:50.437 [2024-11-20 07:13:47.672725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:50.437 [2024-11-20 07:13:47.672753] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:50.437 [2024-11-20 07:13:47.672962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.437 "name": "raid_bdev1", 00:16:50.437 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:50.437 "strip_size_kb": 0, 00:16:50.437 "state": "online", 00:16:50.437 "raid_level": "raid1", 00:16:50.437 "superblock": false, 00:16:50.437 "num_base_bdevs": 2, 00:16:50.437 "num_base_bdevs_discovered": 2, 00:16:50.437 "num_base_bdevs_operational": 2, 00:16:50.437 "base_bdevs_list": [ 00:16:50.437 { 00:16:50.437 "name": "BaseBdev1", 00:16:50.437 "uuid": "dcf50586-b4ea-57bb-9ca5-4435362a8da0", 00:16:50.437 "is_configured": true, 00:16:50.437 "data_offset": 0, 00:16:50.437 "data_size": 65536 00:16:50.437 }, 00:16:50.437 { 00:16:50.437 "name": "BaseBdev2", 00:16:50.437 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:50.437 "is_configured": true, 00:16:50.437 "data_offset": 0, 00:16:50.437 "data_size": 65536 00:16:50.437 } 00:16:50.437 ] 00:16:50.437 }' 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.437 07:13:47 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.003 [2024-11-20 07:13:48.186106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:51.003 [2024-11-20 07:13:48.289755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.003 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.262 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:16:51.262 "name": "raid_bdev1", 00:16:51.262 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:51.262 "strip_size_kb": 0, 00:16:51.262 "state": "online", 00:16:51.262 "raid_level": "raid1", 00:16:51.262 "superblock": false, 00:16:51.262 "num_base_bdevs": 2, 00:16:51.262 "num_base_bdevs_discovered": 1, 00:16:51.262 "num_base_bdevs_operational": 1, 00:16:51.262 "base_bdevs_list": [ 00:16:51.262 { 00:16:51.262 "name": null, 00:16:51.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.262 "is_configured": false, 00:16:51.262 "data_offset": 0, 00:16:51.262 "data_size": 65536 00:16:51.262 }, 00:16:51.262 { 00:16:51.262 "name": "BaseBdev2", 00:16:51.262 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:51.262 "is_configured": true, 00:16:51.262 "data_offset": 0, 00:16:51.262 "data_size": 65536 00:16:51.262 } 00:16:51.262 ] 00:16:51.262 }' 00:16:51.262 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.262 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.262 [2024-11-20 07:13:48.421916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:51.262 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:51.262 Zero copy mechanism will not be used. 00:16:51.262 Running I/O for 60 seconds... 
00:16:51.520 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.520 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.520 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.520 [2024-11-20 07:13:48.821196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.779 07:13:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.779 07:13:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:51.779 [2024-11-20 07:13:48.867914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:51.779 [2024-11-20 07:13:48.870479] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:51.779 [2024-11-20 07:13:48.996279] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:51.779 [2024-11-20 07:13:48.997017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:52.037 [2024-11-20 07:13:49.124625] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:52.037 [2024-11-20 07:13:49.125032] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:52.296 186.00 IOPS, 558.00 MiB/s [2024-11-20T07:13:49.616Z] [2024-11-20 07:13:49.435473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:52.296 [2024-11-20 07:13:49.555600] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:52.296 [2024-11-20 07:13:49.556322] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.555 07:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.897 07:13:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.897 [2024-11-20 07:13:49.898081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:52.897 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.897 "name": "raid_bdev1", 00:16:52.897 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:52.897 "strip_size_kb": 0, 00:16:52.897 "state": "online", 00:16:52.897 "raid_level": "raid1", 00:16:52.897 "superblock": false, 00:16:52.897 "num_base_bdevs": 2, 00:16:52.897 "num_base_bdevs_discovered": 2, 00:16:52.897 "num_base_bdevs_operational": 2, 00:16:52.897 "process": { 00:16:52.897 "type": "rebuild", 00:16:52.897 "target": "spare", 00:16:52.897 "progress": { 00:16:52.897 "blocks": 12288, 
00:16:52.897 "percent": 18 00:16:52.897 } 00:16:52.897 }, 00:16:52.897 "base_bdevs_list": [ 00:16:52.897 { 00:16:52.897 "name": "spare", 00:16:52.897 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:52.897 "is_configured": true, 00:16:52.897 "data_offset": 0, 00:16:52.897 "data_size": 65536 00:16:52.897 }, 00:16:52.897 { 00:16:52.897 "name": "BaseBdev2", 00:16:52.897 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:52.897 "is_configured": true, 00:16:52.897 "data_offset": 0, 00:16:52.897 "data_size": 65536 00:16:52.897 } 00:16:52.897 ] 00:16:52.897 }' 00:16:52.897 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.897 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.897 07:13:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.897 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.897 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:52.897 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.897 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.897 [2024-11-20 07:13:50.077071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.897 [2024-11-20 07:13:50.125686] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:52.897 [2024-11-20 07:13:50.126113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:53.186 [2024-11-20 07:13:50.243681] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:53.186 [2024-11-20 07:13:50.254659] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.186 [2024-11-20 07:13:50.254707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.186 [2024-11-20 07:13:50.254732] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:53.186 [2024-11-20 07:13:50.298316] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.186 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.186 "name": "raid_bdev1", 00:16:53.186 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:53.186 "strip_size_kb": 0, 00:16:53.186 "state": "online", 00:16:53.186 "raid_level": "raid1", 00:16:53.186 "superblock": false, 00:16:53.186 "num_base_bdevs": 2, 00:16:53.186 "num_base_bdevs_discovered": 1, 00:16:53.186 "num_base_bdevs_operational": 1, 00:16:53.186 "base_bdevs_list": [ 00:16:53.186 { 00:16:53.186 "name": null, 00:16:53.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.186 "is_configured": false, 00:16:53.186 "data_offset": 0, 00:16:53.186 "data_size": 65536 00:16:53.186 }, 00:16:53.186 { 00:16:53.186 "name": "BaseBdev2", 00:16:53.187 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:53.187 "is_configured": true, 00:16:53.187 "data_offset": 0, 00:16:53.187 "data_size": 65536 00:16:53.187 } 00:16:53.187 ] 00:16:53.187 }' 00:16:53.187 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.187 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.754 132.00 IOPS, 396.00 MiB/s [2024-11-20T07:13:51.074Z] 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.754 
07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.754 "name": "raid_bdev1", 00:16:53.754 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:53.754 "strip_size_kb": 0, 00:16:53.754 "state": "online", 00:16:53.754 "raid_level": "raid1", 00:16:53.754 "superblock": false, 00:16:53.754 "num_base_bdevs": 2, 00:16:53.754 "num_base_bdevs_discovered": 1, 00:16:53.754 "num_base_bdevs_operational": 1, 00:16:53.754 "base_bdevs_list": [ 00:16:53.754 { 00:16:53.754 "name": null, 00:16:53.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.754 "is_configured": false, 00:16:53.754 "data_offset": 0, 00:16:53.754 "data_size": 65536 00:16:53.754 }, 00:16:53.754 { 00:16:53.754 "name": "BaseBdev2", 00:16:53.754 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:53.754 "is_configured": true, 00:16:53.754 "data_offset": 0, 00:16:53.754 "data_size": 65536 00:16:53.754 } 00:16:53.754 ] 00:16:53.754 }' 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.754 07:13:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.754 07:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.754 07:13:51 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:53.754 07:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.754 07:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.754 [2024-11-20 07:13:51.025814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.013 07:13:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.013 07:13:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:54.013 [2024-11-20 07:13:51.114059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:54.013 [2024-11-20 07:13:51.116685] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.013 [2024-11-20 07:13:51.235631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:54.013 [2024-11-20 07:13:51.236624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:54.272 146.67 IOPS, 440.00 MiB/s [2024-11-20T07:13:51.592Z] [2024-11-20 07:13:51.464295] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:54.272 [2024-11-20 07:13:51.464981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:54.847 [2024-11-20 07:13:52.094730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.847 "name": "raid_bdev1", 00:16:54.847 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:54.847 "strip_size_kb": 0, 00:16:54.847 "state": "online", 00:16:54.847 "raid_level": "raid1", 00:16:54.847 "superblock": false, 00:16:54.847 "num_base_bdevs": 2, 00:16:54.847 "num_base_bdevs_discovered": 2, 00:16:54.847 "num_base_bdevs_operational": 2, 00:16:54.847 "process": { 00:16:54.847 "type": "rebuild", 00:16:54.847 "target": "spare", 00:16:54.847 "progress": { 00:16:54.847 "blocks": 12288, 00:16:54.847 "percent": 18 00:16:54.847 } 00:16:54.847 }, 00:16:54.847 "base_bdevs_list": [ 00:16:54.847 { 00:16:54.847 "name": "spare", 00:16:54.847 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:54.847 "is_configured": true, 00:16:54.847 "data_offset": 0, 00:16:54.847 "data_size": 65536 00:16:54.847 }, 00:16:54.847 { 00:16:54.847 "name": "BaseBdev2", 00:16:54.847 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:54.847 "is_configured": true, 00:16:54.847 "data_offset": 0, 00:16:54.847 "data_size": 65536 00:16:54.847 } 00:16:54.847 ] 00:16:54.847 
}' 00:16:54.847 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=437 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.106 "name": "raid_bdev1", 00:16:55.106 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:55.106 "strip_size_kb": 0, 00:16:55.106 "state": "online", 00:16:55.106 "raid_level": "raid1", 00:16:55.106 "superblock": false, 00:16:55.106 "num_base_bdevs": 2, 00:16:55.106 "num_base_bdevs_discovered": 2, 00:16:55.106 "num_base_bdevs_operational": 2, 00:16:55.106 "process": { 00:16:55.106 "type": "rebuild", 00:16:55.106 "target": "spare", 00:16:55.106 "progress": { 00:16:55.106 "blocks": 14336, 00:16:55.106 "percent": 21 00:16:55.106 } 00:16:55.106 }, 00:16:55.106 "base_bdevs_list": [ 00:16:55.106 { 00:16:55.106 "name": "spare", 00:16:55.106 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:55.106 "is_configured": true, 00:16:55.106 "data_offset": 0, 00:16:55.106 "data_size": 65536 00:16:55.106 }, 00:16:55.106 { 00:16:55.106 "name": "BaseBdev2", 00:16:55.106 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:55.106 "is_configured": true, 00:16:55.106 "data_offset": 0, 00:16:55.106 "data_size": 65536 00:16:55.106 } 00:16:55.106 ] 00:16:55.106 }' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.106 [2024-11-20 07:13:52.332969] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:55.106 [2024-11-20 07:13:52.333666] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.106 07:13:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:55.365 132.75 IOPS, 398.25 MiB/s [2024-11-20T07:13:52.685Z] [2024-11-20 07:13:52.600020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:16:55.930 [2024-11-20 07:13:53.223457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:55.930 [2024-11-20 07:13:53.223854] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.188 116.40 IOPS, 349.20 MiB/s [2024-11-20T07:13:53.508Z] 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.188 "name": "raid_bdev1", 00:16:56.188 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:56.188 "strip_size_kb": 0, 00:16:56.188 "state": "online", 00:16:56.188 "raid_level": "raid1", 00:16:56.188 "superblock": false, 00:16:56.188 "num_base_bdevs": 2, 00:16:56.188 "num_base_bdevs_discovered": 2, 00:16:56.188 "num_base_bdevs_operational": 2, 00:16:56.188 "process": { 00:16:56.188 "type": "rebuild", 00:16:56.188 "target": "spare", 00:16:56.188 "progress": { 00:16:56.188 "blocks": 28672, 00:16:56.188 "percent": 43 00:16:56.188 } 00:16:56.188 }, 00:16:56.188 "base_bdevs_list": [ 00:16:56.188 { 00:16:56.188 "name": "spare", 00:16:56.188 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:56.188 "is_configured": true, 00:16:56.188 "data_offset": 0, 00:16:56.188 "data_size": 65536 00:16:56.188 }, 00:16:56.188 { 00:16:56.188 "name": "BaseBdev2", 00:16:56.188 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:56.188 "is_configured": true, 00:16:56.188 "data_offset": 0, 00:16:56.188 "data_size": 65536 00:16:56.188 } 00:16:56.188 ] 00:16:56.188 }' 00:16:56.188 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.446 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:56.446 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.446 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:56.446 07:13:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:56.446 [2024-11-20 07:13:53.707751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:57.383 
103.00 IOPS, 309.00 MiB/s [2024-11-20T07:13:54.703Z] [2024-11-20 07:13:54.481664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.383 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.383 "name": "raid_bdev1", 00:16:57.383 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:57.383 "strip_size_kb": 0, 00:16:57.383 "state": "online", 00:16:57.383 "raid_level": "raid1", 00:16:57.383 "superblock": false, 00:16:57.383 "num_base_bdevs": 2, 00:16:57.383 "num_base_bdevs_discovered": 2, 00:16:57.383 "num_base_bdevs_operational": 2, 00:16:57.383 "process": { 00:16:57.383 "type": "rebuild", 00:16:57.383 "target": "spare", 00:16:57.383 "progress": { 
00:16:57.383 "blocks": 47104, 00:16:57.383 "percent": 71 00:16:57.383 } 00:16:57.383 }, 00:16:57.383 "base_bdevs_list": [ 00:16:57.383 { 00:16:57.383 "name": "spare", 00:16:57.383 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:57.383 "is_configured": true, 00:16:57.383 "data_offset": 0, 00:16:57.383 "data_size": 65536 00:16:57.383 }, 00:16:57.383 { 00:16:57.383 "name": "BaseBdev2", 00:16:57.383 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:57.383 "is_configured": true, 00:16:57.383 "data_offset": 0, 00:16:57.383 "data_size": 65536 00:16:57.383 } 00:16:57.383 ] 00:16:57.383 }' 00:16:57.384 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.384 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.384 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.642 [2024-11-20 07:13:54.704464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:16:57.642 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.642 07:13:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:57.642 [2024-11-20 07:13:54.923939] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:58.535 93.86 IOPS, 281.57 MiB/s [2024-11-20T07:13:55.855Z] [2024-11-20 07:13:55.698487] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.535 [2024-11-20 07:13:55.806677] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:58.535 [2024-11-20 07:13:55.809390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.535 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.536 "name": "raid_bdev1", 00:16:58.536 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:58.536 "strip_size_kb": 0, 00:16:58.536 "state": "online", 00:16:58.536 "raid_level": "raid1", 00:16:58.536 "superblock": false, 00:16:58.536 "num_base_bdevs": 2, 00:16:58.536 "num_base_bdevs_discovered": 2, 00:16:58.536 "num_base_bdevs_operational": 2, 00:16:58.536 "process": { 00:16:58.536 "type": "rebuild", 00:16:58.536 "target": "spare", 00:16:58.536 "progress": { 00:16:58.536 "blocks": 65536, 00:16:58.536 "percent": 100 00:16:58.536 } 00:16:58.536 }, 00:16:58.536 "base_bdevs_list": [ 00:16:58.536 { 00:16:58.536 "name": "spare", 00:16:58.536 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:58.536 "is_configured": true, 00:16:58.536 
"data_offset": 0, 00:16:58.536 "data_size": 65536 00:16:58.536 }, 00:16:58.536 { 00:16:58.536 "name": "BaseBdev2", 00:16:58.536 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:58.536 "is_configured": true, 00:16:58.536 "data_offset": 0, 00:16:58.536 "data_size": 65536 00:16:58.536 } 00:16:58.536 ] 00:16:58.536 }' 00:16:58.536 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.795 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.795 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.795 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.795 07:13:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.928 86.50 IOPS, 259.50 MiB/s [2024-11-20T07:13:57.248Z] 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.928 07:13:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.928 "name": "raid_bdev1", 00:16:59.928 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:59.928 "strip_size_kb": 0, 00:16:59.928 "state": "online", 00:16:59.928 "raid_level": "raid1", 00:16:59.928 "superblock": false, 00:16:59.928 "num_base_bdevs": 2, 00:16:59.928 "num_base_bdevs_discovered": 2, 00:16:59.928 "num_base_bdevs_operational": 2, 00:16:59.928 "base_bdevs_list": [ 00:16:59.928 { 00:16:59.928 "name": "spare", 00:16:59.928 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:59.928 "is_configured": true, 00:16:59.928 "data_offset": 0, 00:16:59.928 "data_size": 65536 00:16:59.928 }, 00:16:59.928 { 00:16:59.928 "name": "BaseBdev2", 00:16:59.928 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:59.928 "is_configured": true, 00:16:59.928 "data_offset": 0, 00:16:59.928 "data_size": 65536 00:16:59.928 } 00:16:59.928 ] 00:16:59.928 }' 00:16:59.928 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.928 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:59.928 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.929 "name": "raid_bdev1", 00:16:59.929 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:16:59.929 "strip_size_kb": 0, 00:16:59.929 "state": "online", 00:16:59.929 "raid_level": "raid1", 00:16:59.929 "superblock": false, 00:16:59.929 "num_base_bdevs": 2, 00:16:59.929 "num_base_bdevs_discovered": 2, 00:16:59.929 "num_base_bdevs_operational": 2, 00:16:59.929 "base_bdevs_list": [ 00:16:59.929 { 00:16:59.929 "name": "spare", 00:16:59.929 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:16:59.929 "is_configured": true, 00:16:59.929 "data_offset": 0, 00:16:59.929 "data_size": 65536 00:16:59.929 }, 00:16:59.929 { 00:16:59.929 "name": "BaseBdev2", 00:16:59.929 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:16:59.929 "is_configured": true, 00:16:59.929 "data_offset": 0, 00:16:59.929 "data_size": 65536 00:16:59.929 } 00:16:59.929 ] 00:16:59.929 }' 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.929 07:13:57 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:59.929 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.187 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.187 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.187 "name": "raid_bdev1", 
00:17:00.187 "uuid": "cbc4aa51-347d-4f03-8f34-796d8546a21c", 00:17:00.187 "strip_size_kb": 0, 00:17:00.187 "state": "online", 00:17:00.187 "raid_level": "raid1", 00:17:00.187 "superblock": false, 00:17:00.187 "num_base_bdevs": 2, 00:17:00.187 "num_base_bdevs_discovered": 2, 00:17:00.187 "num_base_bdevs_operational": 2, 00:17:00.187 "base_bdevs_list": [ 00:17:00.187 { 00:17:00.187 "name": "spare", 00:17:00.187 "uuid": "9f291e06-afc4-5963-8d95-c2206b6440a5", 00:17:00.187 "is_configured": true, 00:17:00.187 "data_offset": 0, 00:17:00.187 "data_size": 65536 00:17:00.187 }, 00:17:00.187 { 00:17:00.187 "name": "BaseBdev2", 00:17:00.187 "uuid": "a4519464-bcf1-55b0-8f5b-4b49ca4731e8", 00:17:00.187 "is_configured": true, 00:17:00.187 "data_offset": 0, 00:17:00.187 "data_size": 65536 00:17:00.187 } 00:17:00.187 ] 00:17:00.187 }' 00:17:00.187 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.187 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.445 81.67 IOPS, 245.00 MiB/s [2024-11-20T07:13:57.765Z] 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:00.445 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.445 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.445 [2024-11-20 07:13:57.720754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.445 [2024-11-20 07:13:57.720789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.704 00:17:00.704 Latency(us) 00:17:00.704 [2024-11-20T07:13:58.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.704 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:00.704 raid_bdev1 : 9.39 79.32 237.96 0.00 0.00 17225.79 314.65 117249.86 
00:17:00.704 [2024-11-20T07:13:58.024Z] =================================================================================================================== 00:17:00.704 [2024-11-20T07:13:58.024Z] Total : 79.32 237.96 0.00 0.00 17225.79 314.65 117249.86 00:17:00.704 { 00:17:00.704 "results": [ 00:17:00.704 { 00:17:00.704 "job": "raid_bdev1", 00:17:00.704 "core_mask": "0x1", 00:17:00.704 "workload": "randrw", 00:17:00.704 "percentage": 50, 00:17:00.704 "status": "finished", 00:17:00.704 "queue_depth": 2, 00:17:00.704 "io_size": 3145728, 00:17:00.704 "runtime": 9.39217, 00:17:00.704 "iops": 79.3213921809337, 00:17:00.704 "mibps": 237.9641765428011, 00:17:00.704 "io_failed": 0, 00:17:00.704 "io_timeout": 0, 00:17:00.704 "avg_latency_us": 17225.790477120194, 00:17:00.704 "min_latency_us": 314.6472727272727, 00:17:00.704 "max_latency_us": 117249.86181818182 00:17:00.704 } 00:17:00.704 ], 00:17:00.704 "core_count": 1 00:17:00.704 } 00:17:00.704 [2024-11-20 07:13:57.837113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:00.704 [2024-11-20 07:13:57.837169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.704 [2024-11-20 07:13:57.837293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.704 [2024-11-20 07:13:57.837328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:00.704 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.704 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.704 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.704 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:00.704 07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:00.704 
07:13:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.704 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.705 07:13:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:00.965 /dev/nbd0 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:00.965 1+0 records in 00:17:00.965 1+0 records out 00:17:00.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000685107 s, 6.0 MB/s 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # 
nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:00.965 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:01.232 /dev/nbd1 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:01.232 1+0 records in 00:17:01.232 1+0 records out 00:17:01.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328443 s, 12.5 MB/s 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:01.232 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:17:01.491 07:13:58 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:01.750 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 
00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76606 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76606 ']' 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76606 00:17:02.009 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:17:02.267 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.267 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76606 00:17:02.267 killing process with pid 76606 00:17:02.267 Received shutdown signal, test time was about 10.930548 seconds 00:17:02.267 00:17:02.267 Latency(us) 00:17:02.267 [2024-11-20T07:13:59.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.267 [2024-11-20T07:13:59.587Z] =================================================================================================================== 00:17:02.267 [2024-11-20T07:13:59.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:02.267 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:02.267 07:13:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:02.267 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76606' 00:17:02.267 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76606 00:17:02.267 [2024-11-20 07:13:59.355088] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.267 07:13:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76606 00:17:02.267 [2024-11-20 07:13:59.554833] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:03.644 00:17:03.644 real 0m14.317s 00:17:03.644 user 0m18.582s 00:17:03.644 sys 0m1.453s 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.644 ************************************ 00:17:03.644 END TEST raid_rebuild_test_io 00:17:03.644 ************************************ 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.644 07:14:00 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:17:03.644 07:14:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:03.644 07:14:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.644 07:14:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:03.644 ************************************ 00:17:03.644 START TEST raid_rebuild_test_sb_io 00:17:03.644 ************************************ 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:03.644 07:14:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 
00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77008 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77008 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77008 ']' 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:03.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.644 07:14:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:03.644 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:03.644 Zero copy mechanism will not be used. 00:17:03.644 [2024-11-20 07:14:00.850166] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:17:03.644 [2024-11-20 07:14:00.850379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77008 ] 00:17:03.903 [2024-11-20 07:14:01.030390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.904 [2024-11-20 07:14:01.165755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.162 [2024-11-20 07:14:01.377367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.162 [2024-11-20 07:14:01.377426] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 BaseBdev1_malloc 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 [2024-11-20 07:14:01.830218] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:04.787 [2024-11-20 07:14:01.830374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.787 [2024-11-20 07:14:01.830424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:04.787 [2024-11-20 07:14:01.830442] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.787 [2024-11-20 07:14:01.833592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.787 [2024-11-20 07:14:01.833659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:04.787 BaseBdev1 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 BaseBdev2_malloc 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 [2024-11-20 07:14:01.887829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:04.787 [2024-11-20 07:14:01.887973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:04.787 [2024-11-20 07:14:01.888042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:04.787 [2024-11-20 07:14:01.888081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.787 [2024-11-20 07:14:01.891126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.787 [2024-11-20 07:14:01.891382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:04.787 BaseBdev2 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 spare_malloc 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 spare_delay 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 
[2024-11-20 07:14:01.965602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:04.787 [2024-11-20 07:14:01.965687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.787 [2024-11-20 07:14:01.965721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:04.787 [2024-11-20 07:14:01.965740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.787 [2024-11-20 07:14:01.968757] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.787 [2024-11-20 07:14:01.969016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:04.787 spare 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.787 [2024-11-20 07:14:01.977814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:04.787 [2024-11-20 07:14:01.980419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:04.787 [2024-11-20 07:14:01.980812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:04.787 [2024-11-20 07:14:01.980847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:04.787 [2024-11-20 07:14:01.981248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:04.787 [2024-11-20 07:14:01.981482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:04.787 [2024-11-20 
07:14:01.981499] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:04.787 [2024-11-20 07:14:01.981772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.787 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.788 07:14:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:04.788 07:14:02 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.788 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.788 "name": "raid_bdev1", 00:17:04.788 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:04.788 "strip_size_kb": 0, 00:17:04.788 "state": "online", 00:17:04.788 "raid_level": "raid1", 00:17:04.788 "superblock": true, 00:17:04.788 "num_base_bdevs": 2, 00:17:04.788 "num_base_bdevs_discovered": 2, 00:17:04.788 "num_base_bdevs_operational": 2, 00:17:04.788 "base_bdevs_list": [ 00:17:04.788 { 00:17:04.788 "name": "BaseBdev1", 00:17:04.788 "uuid": "d2e68f8f-d391-5677-865a-46bb5454e665", 00:17:04.788 "is_configured": true, 00:17:04.788 "data_offset": 2048, 00:17:04.788 "data_size": 63488 00:17:04.788 }, 00:17:04.788 { 00:17:04.788 "name": "BaseBdev2", 00:17:04.788 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:04.788 "is_configured": true, 00:17:04.788 "data_offset": 2048, 00:17:04.788 "data_size": 63488 00:17:04.788 } 00:17:04.788 ] 00:17:04.788 }' 00:17:04.788 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.788 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 [2024-11-20 07:14:02.554711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 [2024-11-20 07:14:02.662053] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.367 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.626 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.626 "name": "raid_bdev1", 00:17:05.626 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:05.626 "strip_size_kb": 0, 00:17:05.626 "state": "online", 00:17:05.626 "raid_level": "raid1", 00:17:05.626 "superblock": true, 00:17:05.626 "num_base_bdevs": 2, 00:17:05.626 "num_base_bdevs_discovered": 1, 00:17:05.626 "num_base_bdevs_operational": 1, 00:17:05.626 "base_bdevs_list": [ 00:17:05.626 { 00:17:05.626 "name": null, 00:17:05.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.626 "is_configured": false, 00:17:05.626 "data_offset": 0, 00:17:05.626 "data_size": 63488 00:17:05.626 }, 00:17:05.626 { 00:17:05.626 "name": "BaseBdev2", 00:17:05.626 "uuid": 
"2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:05.626 "is_configured": true, 00:17:05.626 "data_offset": 2048, 00:17:05.626 "data_size": 63488 00:17:05.626 } 00:17:05.626 ] 00:17:05.626 }' 00:17:05.626 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.626 07:14:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:05.626 [2024-11-20 07:14:02.794237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:05.626 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:05.626 Zero copy mechanism will not be used. 00:17:05.626 Running I/O for 60 seconds... 00:17:06.193 07:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:06.193 07:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.193 07:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:06.193 [2024-11-20 07:14:03.231118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.194 07:14:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.194 07:14:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:06.194 [2024-11-20 07:14:03.292251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:06.194 [2024-11-20 07:14:03.294985] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:06.194 [2024-11-20 07:14:03.417380] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:06.452 [2024-11-20 07:14:03.653569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:06.452 [2024-11-20 07:14:03.654171] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:06.710 167.00 IOPS, 501.00 MiB/s [2024-11-20T07:14:04.030Z] [2024-11-20 07:14:03.994180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:06.969 [2024-11-20 07:14:04.207128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.969 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.228 "name": "raid_bdev1", 00:17:07.228 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:07.228 "strip_size_kb": 0, 00:17:07.228 "state": "online", 00:17:07.228 "raid_level": "raid1", 00:17:07.228 "superblock": true, 00:17:07.228 "num_base_bdevs": 2, 
00:17:07.228 "num_base_bdevs_discovered": 2, 00:17:07.228 "num_base_bdevs_operational": 2, 00:17:07.228 "process": { 00:17:07.228 "type": "rebuild", 00:17:07.228 "target": "spare", 00:17:07.228 "progress": { 00:17:07.228 "blocks": 10240, 00:17:07.228 "percent": 16 00:17:07.228 } 00:17:07.228 }, 00:17:07.228 "base_bdevs_list": [ 00:17:07.228 { 00:17:07.228 "name": "spare", 00:17:07.228 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:07.228 "is_configured": true, 00:17:07.228 "data_offset": 2048, 00:17:07.228 "data_size": 63488 00:17:07.228 }, 00:17:07.228 { 00:17:07.228 "name": "BaseBdev2", 00:17:07.228 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:07.228 "is_configured": true, 00:17:07.228 "data_offset": 2048, 00:17:07.228 "data_size": 63488 00:17:07.228 } 00:17:07.228 ] 00:17:07.228 }' 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.228 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.228 [2024-11-20 07:14:04.431385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.228 [2024-11-20 07:14:04.521746] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:07.228 [2024-11-20 07:14:04.532407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:07.228 [2024-11-20 07:14:04.532458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.228 [2024-11-20 07:14:04.532473] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.486 [2024-11-20 07:14:04.568863] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:07.486 07:14:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.486 "name": "raid_bdev1", 00:17:07.486 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:07.486 "strip_size_kb": 0, 00:17:07.486 "state": "online", 00:17:07.486 "raid_level": "raid1", 00:17:07.486 "superblock": true, 00:17:07.486 "num_base_bdevs": 2, 00:17:07.486 "num_base_bdevs_discovered": 1, 00:17:07.486 "num_base_bdevs_operational": 1, 00:17:07.486 "base_bdevs_list": [ 00:17:07.486 { 00:17:07.486 "name": null, 00:17:07.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.486 "is_configured": false, 00:17:07.486 "data_offset": 0, 00:17:07.486 "data_size": 63488 00:17:07.486 }, 00:17:07.486 { 00:17:07.486 "name": "BaseBdev2", 00:17:07.486 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:07.486 "is_configured": true, 00:17:07.486 "data_offset": 2048, 00:17:07.486 "data_size": 63488 00:17:07.486 } 00:17:07.486 ] 00:17:07.486 }' 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.486 07:14:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.002 144.00 IOPS, 432.00 MiB/s [2024-11-20T07:14:05.322Z] 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.002 "name": "raid_bdev1", 00:17:08.002 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:08.002 "strip_size_kb": 0, 00:17:08.002 "state": "online", 00:17:08.002 "raid_level": "raid1", 00:17:08.002 "superblock": true, 00:17:08.002 "num_base_bdevs": 2, 00:17:08.002 "num_base_bdevs_discovered": 1, 00:17:08.002 "num_base_bdevs_operational": 1, 00:17:08.002 "base_bdevs_list": [ 00:17:08.002 { 00:17:08.002 "name": null, 00:17:08.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.002 "is_configured": false, 00:17:08.002 "data_offset": 0, 00:17:08.002 "data_size": 63488 00:17:08.002 }, 00:17:08.002 { 00:17:08.002 "name": "BaseBdev2", 00:17:08.002 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:08.002 "is_configured": true, 00:17:08.002 "data_offset": 2048, 00:17:08.002 "data_size": 63488 00:17:08.002 } 00:17:08.002 ] 00:17:08.002 }' 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:08.002 [2024-11-20 07:14:05.271578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.002 07:14:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:08.261 [2024-11-20 07:14:05.340434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:08.261 [2024-11-20 07:14:05.342987] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:08.261 [2024-11-20 07:14:05.453149] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:08.261 [2024-11-20 07:14:05.454026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:17:08.519 [2024-11-20 07:14:05.681984] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:08.519 [2024-11-20 07:14:05.682638] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:17:08.776 162.33 IOPS, 487.00 MiB/s [2024-11-20T07:14:06.096Z] [2024-11-20 07:14:06.052733] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:17:09.034 [2024-11-20 07:14:06.281685] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:09.034 [2024-11-20 07:14:06.282230] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.034 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.292 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.292 "name": "raid_bdev1", 00:17:09.292 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:09.292 "strip_size_kb": 0, 00:17:09.292 "state": "online", 00:17:09.292 "raid_level": "raid1", 00:17:09.292 "superblock": true, 00:17:09.292 "num_base_bdevs": 2, 00:17:09.292 "num_base_bdevs_discovered": 2, 00:17:09.292 "num_base_bdevs_operational": 2, 00:17:09.292 "process": { 00:17:09.293 "type": "rebuild", 00:17:09.293 "target": "spare", 00:17:09.293 "progress": { 00:17:09.293 "blocks": 10240, 00:17:09.293 "percent": 16 00:17:09.293 } 00:17:09.293 }, 00:17:09.293 "base_bdevs_list": [ 00:17:09.293 { 00:17:09.293 "name": 
"spare", 00:17:09.293 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "BaseBdev2", 00:17:09.293 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 } 00:17:09.293 ] 00:17:09.293 }' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:09.293 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=451 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.293 "name": "raid_bdev1", 00:17:09.293 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:09.293 "strip_size_kb": 0, 00:17:09.293 "state": "online", 00:17:09.293 "raid_level": "raid1", 00:17:09.293 "superblock": true, 00:17:09.293 "num_base_bdevs": 2, 00:17:09.293 "num_base_bdevs_discovered": 2, 00:17:09.293 "num_base_bdevs_operational": 2, 00:17:09.293 "process": { 00:17:09.293 "type": "rebuild", 00:17:09.293 "target": "spare", 00:17:09.293 "progress": { 00:17:09.293 "blocks": 12288, 00:17:09.293 "percent": 19 00:17:09.293 } 00:17:09.293 }, 00:17:09.293 "base_bdevs_list": [ 00:17:09.293 { 00:17:09.293 "name": "spare", 00:17:09.293 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:09.293 "is_configured": true, 00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 }, 00:17:09.293 { 00:17:09.293 "name": "BaseBdev2", 00:17:09.293 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:09.293 "is_configured": true, 
00:17:09.293 "data_offset": 2048, 00:17:09.293 "data_size": 63488 00:17:09.293 } 00:17:09.293 ] 00:17:09.293 }' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:09.293 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.551 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.551 07:14:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:09.809 144.25 IOPS, 432.75 MiB/s [2024-11-20T07:14:07.129Z] [2024-11-20 07:14:06.893674] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:17:09.809 [2024-11-20 07:14:07.123791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.378 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.378 "name": "raid_bdev1", 00:17:10.378 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:10.378 "strip_size_kb": 0, 00:17:10.378 "state": "online", 00:17:10.378 "raid_level": "raid1", 00:17:10.378 "superblock": true, 00:17:10.378 "num_base_bdevs": 2, 00:17:10.378 "num_base_bdevs_discovered": 2, 00:17:10.378 "num_base_bdevs_operational": 2, 00:17:10.378 "process": { 00:17:10.378 "type": "rebuild", 00:17:10.378 "target": "spare", 00:17:10.378 "progress": { 00:17:10.378 "blocks": 30720, 00:17:10.378 "percent": 48 00:17:10.378 } 00:17:10.378 }, 00:17:10.378 "base_bdevs_list": [ 00:17:10.378 { 00:17:10.378 "name": "spare", 00:17:10.378 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:10.378 "is_configured": true, 00:17:10.378 "data_offset": 2048, 00:17:10.378 "data_size": 63488 00:17:10.378 }, 00:17:10.378 { 00:17:10.378 "name": "BaseBdev2", 00:17:10.378 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:10.378 "is_configured": true, 00:17:10.379 "data_offset": 2048, 00:17:10.379 "data_size": 63488 00:17:10.379 } 00:17:10.379 ] 00:17:10.379 }' 00:17:10.379 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.637 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.637 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.637 07:14:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.637 07:14:07 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:10.637 [2024-11-20 07:14:07.791911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:10.637 [2024-11-20 07:14:07.792232] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:17:11.575 125.60 IOPS, 376.80 MiB/s [2024-11-20T07:14:08.895Z] [2024-11-20 07:14:08.659176] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.575 112.33 IOPS, 337.00 MiB/s [2024-11-20T07:14:08.895Z] 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.575 "name": "raid_bdev1", 00:17:11.575 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:11.575 "strip_size_kb": 0, 00:17:11.575 "state": "online", 00:17:11.575 "raid_level": "raid1", 00:17:11.575 "superblock": true, 00:17:11.575 "num_base_bdevs": 2, 00:17:11.575 "num_base_bdevs_discovered": 2, 00:17:11.575 "num_base_bdevs_operational": 2, 00:17:11.575 "process": { 00:17:11.575 "type": "rebuild", 00:17:11.575 "target": "spare", 00:17:11.575 "progress": { 00:17:11.575 "blocks": 53248, 00:17:11.575 "percent": 83 00:17:11.575 } 00:17:11.575 }, 00:17:11.575 "base_bdevs_list": [ 00:17:11.575 { 00:17:11.575 "name": "spare", 00:17:11.575 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:11.575 "is_configured": true, 00:17:11.575 "data_offset": 2048, 00:17:11.575 "data_size": 63488 00:17:11.575 }, 00:17:11.575 { 00:17:11.575 "name": "BaseBdev2", 00:17:11.575 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:11.575 "is_configured": true, 00:17:11.575 "data_offset": 2048, 00:17:11.575 "data_size": 63488 00:17:11.575 } 00:17:11.575 ] 00:17:11.575 }' 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:11.575 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.839 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:11.839 07:14:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:12.097 [2024-11-20 07:14:09.287709] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:12.097 [2024-11-20 07:14:09.387801] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:12.097 [2024-11-20 07:14:09.398756] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.664 100.71 IOPS, 302.14 MiB/s [2024-11-20T07:14:09.984Z] 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:12.664 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.664 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.664 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.664 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.664 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.664 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.665 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.665 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.665 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.665 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.924 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.924 "name": "raid_bdev1", 00:17:12.924 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:12.924 "strip_size_kb": 0, 00:17:12.924 "state": "online", 00:17:12.924 "raid_level": "raid1", 00:17:12.924 "superblock": true, 00:17:12.924 "num_base_bdevs": 2, 00:17:12.924 "num_base_bdevs_discovered": 2, 00:17:12.924 "num_base_bdevs_operational": 2, 00:17:12.924 "base_bdevs_list": [ 00:17:12.924 { 00:17:12.924 "name": "spare", 00:17:12.924 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:12.924 
"is_configured": true, 00:17:12.924 "data_offset": 2048, 00:17:12.924 "data_size": 63488 00:17:12.924 }, 00:17:12.924 { 00:17:12.924 "name": "BaseBdev2", 00:17:12.924 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:12.924 "is_configured": true, 00:17:12.924 "data_offset": 2048, 00:17:12.924 "data_size": 63488 00:17:12.924 } 00:17:12.924 ] 00:17:12.924 }' 00:17:12.924 07:14:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.924 07:14:10 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.924 "name": "raid_bdev1", 00:17:12.924 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:12.924 "strip_size_kb": 0, 00:17:12.924 "state": "online", 00:17:12.924 "raid_level": "raid1", 00:17:12.924 "superblock": true, 00:17:12.924 "num_base_bdevs": 2, 00:17:12.924 "num_base_bdevs_discovered": 2, 00:17:12.924 "num_base_bdevs_operational": 2, 00:17:12.924 "base_bdevs_list": [ 00:17:12.924 { 00:17:12.924 "name": "spare", 00:17:12.924 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:12.924 "is_configured": true, 00:17:12.924 "data_offset": 2048, 00:17:12.924 "data_size": 63488 00:17:12.924 }, 00:17:12.924 { 00:17:12.924 "name": "BaseBdev2", 00:17:12.924 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:12.924 "is_configured": true, 00:17:12.924 "data_offset": 2048, 00:17:12.924 "data_size": 63488 00:17:12.924 } 00:17:12.924 ] 00:17:12.924 }' 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.924 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.184 "name": "raid_bdev1", 00:17:13.184 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:13.184 "strip_size_kb": 0, 00:17:13.184 "state": "online", 00:17:13.184 "raid_level": "raid1", 00:17:13.184 "superblock": true, 00:17:13.184 "num_base_bdevs": 2, 00:17:13.184 "num_base_bdevs_discovered": 2, 00:17:13.184 "num_base_bdevs_operational": 2, 00:17:13.184 "base_bdevs_list": [ 00:17:13.184 { 00:17:13.184 "name": "spare", 00:17:13.184 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:13.184 "is_configured": true, 00:17:13.184 "data_offset": 2048, 00:17:13.184 "data_size": 63488 00:17:13.184 }, 00:17:13.184 { 00:17:13.184 "name": "BaseBdev2", 00:17:13.184 
"uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:13.184 "is_configured": true, 00:17:13.184 "data_offset": 2048, 00:17:13.184 "data_size": 63488 00:17:13.184 } 00:17:13.184 ] 00:17:13.184 }' 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.184 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 [2024-11-20 07:14:10.772651] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.752 [2024-11-20 07:14:10.772820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.752 93.25 IOPS, 279.75 MiB/s 00:17:13.752 Latency(us) 00:17:13.752 [2024-11-20T07:14:11.072Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.752 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:17:13.752 raid_bdev1 : 8.04 92.92 278.75 0.00 0.00 15507.46 277.41 119632.99 00:17:13.752 [2024-11-20T07:14:11.072Z] =================================================================================================================== 00:17:13.752 [2024-11-20T07:14:11.072Z] Total : 92.92 278.75 0.00 0.00 15507.46 277.41 119632.99 00:17:13.752 [2024-11-20 07:14:10.856825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.752 [2024-11-20 07:14:10.857099] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.752 [2024-11-20 07:14:10.857358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.752 { 
00:17:13.752 "results": [ 00:17:13.752 { 00:17:13.752 "job": "raid_bdev1", 00:17:13.752 "core_mask": "0x1", 00:17:13.752 "workload": "randrw", 00:17:13.752 "percentage": 50, 00:17:13.752 "status": "finished", 00:17:13.752 "queue_depth": 2, 00:17:13.752 "io_size": 3145728, 00:17:13.752 "runtime": 8.039596, 00:17:13.752 "iops": 92.91511663023863, 00:17:13.752 "mibps": 278.7453498907159, 00:17:13.752 "io_failed": 0, 00:17:13.752 "io_timeout": 0, 00:17:13.752 "avg_latency_us": 15507.45998783011, 00:17:13.752 "min_latency_us": 277.4109090909091, 00:17:13.752 "max_latency_us": 119632.98909090909 00:17:13.752 } 00:17:13.752 ], 00:17:13.752 "core_count": 1 00:17:13.752 } 00:17:13.752 [2024-11-20 07:14:10.857665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:13.752 07:14:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:17:14.011 /dev/nbd0 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.011 1+0 records in 00:17:14.011 1+0 records out 00:17:14.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318055 s, 12.9 MB/s 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.011 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:17:14.270 /dev/nbd1 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:14.270 1+0 records in 00:17:14.270 1+0 records out 00:17:14.270 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369267 s, 11.1 MB/s 
00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:14.270 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.529 07:14:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:14.789 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:15.048 07:14:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.048 [2024-11-20 07:14:12.340695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:15.048 [2024-11-20 07:14:12.340767] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.048 [2024-11-20 07:14:12.340798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:15.048 [2024-11-20 07:14:12.340819] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.048 [2024-11-20 07:14:12.343754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.048 [2024-11-20 07:14:12.343806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:15.048 [2024-11-20 07:14:12.343951] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:17:15.048 [2024-11-20 07:14:12.344030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.048 [2024-11-20 07:14:12.344199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.048 spare 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.048 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.307 [2024-11-20 07:14:12.444370] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:15.307 [2024-11-20 07:14:12.444433] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.307 [2024-11-20 07:14:12.444859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:17:15.307 [2024-11-20 07:14:12.445406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:15.307 [2024-11-20 07:14:12.445594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:15.307 [2024-11-20 07:14:12.445991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.307 
07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.307 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.307 "name": "raid_bdev1", 00:17:15.307 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:15.307 "strip_size_kb": 0, 00:17:15.308 "state": "online", 00:17:15.308 "raid_level": "raid1", 00:17:15.308 "superblock": true, 00:17:15.308 "num_base_bdevs": 2, 00:17:15.308 "num_base_bdevs_discovered": 2, 00:17:15.308 "num_base_bdevs_operational": 2, 00:17:15.308 "base_bdevs_list": [ 00:17:15.308 { 00:17:15.308 "name": "spare", 00:17:15.308 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:15.308 "is_configured": true, 00:17:15.308 "data_offset": 2048, 00:17:15.308 
"data_size": 63488 00:17:15.308 }, 00:17:15.308 { 00:17:15.308 "name": "BaseBdev2", 00:17:15.308 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:15.308 "is_configured": true, 00:17:15.308 "data_offset": 2048, 00:17:15.308 "data_size": 63488 00:17:15.308 } 00:17:15.308 ] 00:17:15.308 }' 00:17:15.308 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.308 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.640 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.900 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.900 07:14:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.900 "name": "raid_bdev1", 00:17:15.900 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:15.900 "strip_size_kb": 0, 00:17:15.900 "state": "online", 00:17:15.900 "raid_level": "raid1", 00:17:15.900 "superblock": true, 00:17:15.900 "num_base_bdevs": 2, 
00:17:15.900 "num_base_bdevs_discovered": 2, 00:17:15.900 "num_base_bdevs_operational": 2, 00:17:15.900 "base_bdevs_list": [ 00:17:15.900 { 00:17:15.900 "name": "spare", 00:17:15.900 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:15.900 "is_configured": true, 00:17:15.900 "data_offset": 2048, 00:17:15.900 "data_size": 63488 00:17:15.900 }, 00:17:15.900 { 00:17:15.900 "name": "BaseBdev2", 00:17:15.900 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:15.900 "is_configured": true, 00:17:15.900 "data_offset": 2048, 00:17:15.900 "data_size": 63488 00:17:15.900 } 00:17:15.900 ] 00:17:15.900 }' 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.900 07:14:13 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 [2024-11-20 07:14:13.154213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.900 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.159 
07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.159 "name": "raid_bdev1", 00:17:16.159 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:16.159 "strip_size_kb": 0, 00:17:16.159 "state": "online", 00:17:16.159 "raid_level": "raid1", 00:17:16.159 "superblock": true, 00:17:16.159 "num_base_bdevs": 2, 00:17:16.159 "num_base_bdevs_discovered": 1, 00:17:16.159 "num_base_bdevs_operational": 1, 00:17:16.159 "base_bdevs_list": [ 00:17:16.159 { 00:17:16.159 "name": null, 00:17:16.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.159 "is_configured": false, 00:17:16.159 "data_offset": 0, 00:17:16.159 "data_size": 63488 00:17:16.159 }, 00:17:16.159 { 00:17:16.159 "name": "BaseBdev2", 00:17:16.159 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:16.159 "is_configured": true, 00:17:16.159 "data_offset": 2048, 00:17:16.159 "data_size": 63488 00:17:16.159 } 00:17:16.159 ] 00:17:16.159 }' 00:17:16.159 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.159 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.418 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:16.418 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.419 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:16.419 [2024-11-20 07:14:13.658462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.419 [2024-11-20 07:14:13.659810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:16.419 [2024-11-20 07:14:13.659840] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:16.419 [2024-11-20 07:14:13.659911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:16.419 [2024-11-20 07:14:13.675953] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:17:16.419 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.419 07:14:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:16.419 [2024-11-20 07:14:13.678402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:17.796 "name": "raid_bdev1", 00:17:17.796 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:17.796 "strip_size_kb": 0, 00:17:17.796 "state": "online", 
00:17:17.796 "raid_level": "raid1", 00:17:17.796 "superblock": true, 00:17:17.796 "num_base_bdevs": 2, 00:17:17.796 "num_base_bdevs_discovered": 2, 00:17:17.796 "num_base_bdevs_operational": 2, 00:17:17.796 "process": { 00:17:17.796 "type": "rebuild", 00:17:17.796 "target": "spare", 00:17:17.796 "progress": { 00:17:17.796 "blocks": 20480, 00:17:17.796 "percent": 32 00:17:17.796 } 00:17:17.796 }, 00:17:17.796 "base_bdevs_list": [ 00:17:17.796 { 00:17:17.796 "name": "spare", 00:17:17.796 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:17.796 "is_configured": true, 00:17:17.796 "data_offset": 2048, 00:17:17.796 "data_size": 63488 00:17:17.796 }, 00:17:17.796 { 00:17:17.796 "name": "BaseBdev2", 00:17:17.796 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:17.796 "is_configured": true, 00:17:17.796 "data_offset": 2048, 00:17:17.796 "data_size": 63488 00:17:17.796 } 00:17:17.796 ] 00:17:17.796 }' 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:17.796 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:17.797 [2024-11-20 07:14:14.860058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.797 [2024-11-20 07:14:14.887196] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:17.797 [2024-11-20 
07:14:14.887269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:17.797 [2024-11-20 07:14:14.887297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:17.797 [2024-11-20 07:14:14.887308] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.797 "name": "raid_bdev1", 00:17:17.797 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:17.797 "strip_size_kb": 0, 00:17:17.797 "state": "online", 00:17:17.797 "raid_level": "raid1", 00:17:17.797 "superblock": true, 00:17:17.797 "num_base_bdevs": 2, 00:17:17.797 "num_base_bdevs_discovered": 1, 00:17:17.797 "num_base_bdevs_operational": 1, 00:17:17.797 "base_bdevs_list": [ 00:17:17.797 { 00:17:17.797 "name": null, 00:17:17.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.797 "is_configured": false, 00:17:17.797 "data_offset": 0, 00:17:17.797 "data_size": 63488 00:17:17.797 }, 00:17:17.797 { 00:17:17.797 "name": "BaseBdev2", 00:17:17.797 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:17.797 "is_configured": true, 00:17:17.797 "data_offset": 2048, 00:17:17.797 "data_size": 63488 00:17:17.797 } 00:17:17.797 ] 00:17:17.797 }' 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.797 07:14:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.364 07:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:18.364 07:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.364 07:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:18.364 [2024-11-20 07:14:15.434008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:18.364 [2024-11-20 07:14:15.434233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.364 [2024-11-20 07:14:15.434281] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:17:18.364 [2024-11-20 07:14:15.434297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.364 [2024-11-20 07:14:15.434903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.364 [2024-11-20 07:14:15.434934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:18.364 [2024-11-20 07:14:15.435066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:18.364 [2024-11-20 07:14:15.435085] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:18.364 [2024-11-20 07:14:15.435105] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:18.364 [2024-11-20 07:14:15.435147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:18.364 [2024-11-20 07:14:15.451112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:17:18.364 spare 00:17:18.364 07:14:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.364 07:14:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:18.364 [2024-11-20 07:14:15.453586] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.300 "name": "raid_bdev1", 00:17:19.300 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:19.300 "strip_size_kb": 0, 00:17:19.300 "state": "online", 00:17:19.300 "raid_level": "raid1", 00:17:19.300 "superblock": true, 00:17:19.300 "num_base_bdevs": 2, 00:17:19.300 "num_base_bdevs_discovered": 2, 00:17:19.300 "num_base_bdevs_operational": 2, 00:17:19.300 "process": { 00:17:19.300 "type": "rebuild", 00:17:19.300 "target": "spare", 00:17:19.300 "progress": { 00:17:19.300 "blocks": 20480, 00:17:19.300 "percent": 32 00:17:19.300 } 00:17:19.300 }, 00:17:19.300 "base_bdevs_list": [ 00:17:19.300 { 00:17:19.300 "name": "spare", 00:17:19.300 "uuid": "1aad2e4b-6140-5dfd-80f1-9811bf44b805", 00:17:19.300 "is_configured": true, 00:17:19.300 "data_offset": 2048, 00:17:19.300 "data_size": 63488 00:17:19.300 }, 00:17:19.300 { 00:17:19.300 "name": "BaseBdev2", 00:17:19.300 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:19.300 "is_configured": true, 00:17:19.300 "data_offset": 2048, 00:17:19.300 "data_size": 63488 00:17:19.300 } 00:17:19.300 ] 00:17:19.300 }' 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.300 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.559 [2024-11-20 07:14:16.619460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.559 [2024-11-20 07:14:16.662640] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:19.559 [2024-11-20 07:14:16.662902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.559 [2024-11-20 07:14:16.663035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:19.559 [2024-11-20 07:14:16.663092] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.559 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.559 "name": "raid_bdev1", 00:17:19.560 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:19.560 "strip_size_kb": 0, 00:17:19.560 "state": "online", 00:17:19.560 "raid_level": "raid1", 00:17:19.560 "superblock": true, 00:17:19.560 "num_base_bdevs": 2, 00:17:19.560 "num_base_bdevs_discovered": 1, 00:17:19.560 "num_base_bdevs_operational": 1, 00:17:19.560 "base_bdevs_list": [ 00:17:19.560 { 00:17:19.560 "name": null, 00:17:19.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.560 "is_configured": false, 00:17:19.560 "data_offset": 0, 00:17:19.560 "data_size": 63488 00:17:19.560 }, 00:17:19.560 { 00:17:19.560 "name": "BaseBdev2", 00:17:19.560 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:19.560 "is_configured": true, 00:17:19.560 "data_offset": 2048, 00:17:19.560 "data_size": 63488 00:17:19.560 } 00:17:19.560 ] 00:17:19.560 }' 
00:17:19.560 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.560 07:14:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:20.128 "name": "raid_bdev1", 00:17:20.128 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:20.128 "strip_size_kb": 0, 00:17:20.128 "state": "online", 00:17:20.128 "raid_level": "raid1", 00:17:20.128 "superblock": true, 00:17:20.128 "num_base_bdevs": 2, 00:17:20.128 "num_base_bdevs_discovered": 1, 00:17:20.128 "num_base_bdevs_operational": 1, 00:17:20.128 "base_bdevs_list": [ 00:17:20.128 { 00:17:20.128 "name": null, 00:17:20.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.128 "is_configured": false, 00:17:20.128 "data_offset": 0, 
00:17:20.128 "data_size": 63488 00:17:20.128 }, 00:17:20.128 { 00:17:20.128 "name": "BaseBdev2", 00:17:20.128 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:20.128 "is_configured": true, 00:17:20.128 "data_offset": 2048, 00:17:20.128 "data_size": 63488 00:17:20.128 } 00:17:20.128 ] 00:17:20.128 }' 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:20.128 [2024-11-20 07:14:17.390333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:20.128 [2024-11-20 07:14:17.390405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.128 [2024-11-20 07:14:17.390434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:20.128 [2024-11-20 07:14:17.390452] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.128 [2024-11-20 07:14:17.391041] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.128 [2024-11-20 07:14:17.391081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:20.128 [2024-11-20 07:14:17.391176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:20.128 [2024-11-20 07:14:17.391204] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:20.128 [2024-11-20 07:14:17.391216] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:20.128 [2024-11-20 07:14:17.391232] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:20.128 BaseBdev1 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.128 07:14:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.507 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.507 "name": "raid_bdev1", 00:17:21.507 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:21.507 "strip_size_kb": 0, 00:17:21.507 "state": "online", 00:17:21.507 "raid_level": "raid1", 00:17:21.507 "superblock": true, 00:17:21.507 "num_base_bdevs": 2, 00:17:21.507 "num_base_bdevs_discovered": 1, 00:17:21.507 "num_base_bdevs_operational": 1, 00:17:21.508 "base_bdevs_list": [ 00:17:21.508 { 00:17:21.508 "name": null, 00:17:21.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.508 "is_configured": false, 00:17:21.508 "data_offset": 0, 00:17:21.508 "data_size": 63488 00:17:21.508 }, 00:17:21.508 { 00:17:21.508 "name": "BaseBdev2", 00:17:21.508 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:21.508 "is_configured": true, 00:17:21.508 "data_offset": 2048, 00:17:21.508 "data_size": 63488 00:17:21.508 } 00:17:21.508 ] 00:17:21.508 }' 00:17:21.508 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.508 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.766 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.766 "name": "raid_bdev1", 00:17:21.766 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:21.766 "strip_size_kb": 0, 00:17:21.766 "state": "online", 00:17:21.766 "raid_level": "raid1", 00:17:21.766 "superblock": true, 00:17:21.766 "num_base_bdevs": 2, 00:17:21.766 "num_base_bdevs_discovered": 1, 00:17:21.766 "num_base_bdevs_operational": 1, 00:17:21.766 "base_bdevs_list": [ 00:17:21.766 { 00:17:21.767 "name": null, 00:17:21.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.767 "is_configured": false, 00:17:21.767 "data_offset": 0, 00:17:21.767 "data_size": 63488 00:17:21.767 }, 00:17:21.767 { 00:17:21.767 "name": "BaseBdev2", 00:17:21.767 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:21.767 "is_configured": true, 
00:17:21.767 "data_offset": 2048, 00:17:21.767 "data_size": 63488 00:17:21.767 } 00:17:21.767 ] 00:17:21.767 }' 00:17:21.767 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.767 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.767 07:14:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:21.767 [2024-11-20 07:14:19.039171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.767 [2024-11-20 07:14:19.039367] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:21.767 [2024-11-20 07:14:19.039387] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:21.767 request: 00:17:21.767 { 00:17:21.767 "base_bdev": "BaseBdev1", 00:17:21.767 "raid_bdev": "raid_bdev1", 00:17:21.767 "method": "bdev_raid_add_base_bdev", 00:17:21.767 "req_id": 1 00:17:21.767 } 00:17:21.767 Got JSON-RPC error response 00:17:21.767 response: 00:17:21.767 { 00:17:21.767 "code": -22, 00:17:21.767 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:21.767 } 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:21.767 07:14:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.170 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.170 "name": "raid_bdev1", 00:17:23.170 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:23.170 "strip_size_kb": 0, 00:17:23.170 "state": "online", 00:17:23.170 "raid_level": "raid1", 00:17:23.170 "superblock": true, 00:17:23.170 "num_base_bdevs": 2, 00:17:23.170 "num_base_bdevs_discovered": 1, 00:17:23.170 "num_base_bdevs_operational": 1, 00:17:23.170 "base_bdevs_list": [ 00:17:23.170 { 00:17:23.170 "name": null, 00:17:23.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.171 "is_configured": false, 00:17:23.171 "data_offset": 0, 00:17:23.171 "data_size": 63488 00:17:23.171 }, 00:17:23.171 { 00:17:23.171 "name": "BaseBdev2", 00:17:23.171 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:23.171 "is_configured": true, 00:17:23.171 "data_offset": 2048, 00:17:23.171 "data_size": 63488 00:17:23.171 } 00:17:23.171 ] 00:17:23.171 }' 
00:17:23.171 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.171 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.430 "name": "raid_bdev1", 00:17:23.430 "uuid": "fabb1774-efdf-4bdf-8dde-2778650c896c", 00:17:23.430 "strip_size_kb": 0, 00:17:23.430 "state": "online", 00:17:23.430 "raid_level": "raid1", 00:17:23.430 "superblock": true, 00:17:23.430 "num_base_bdevs": 2, 00:17:23.430 "num_base_bdevs_discovered": 1, 00:17:23.430 "num_base_bdevs_operational": 1, 00:17:23.430 "base_bdevs_list": [ 00:17:23.430 { 00:17:23.430 "name": null, 00:17:23.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.430 "is_configured": false, 00:17:23.430 "data_offset": 0, 
00:17:23.430 "data_size": 63488 00:17:23.430 }, 00:17:23.430 { 00:17:23.430 "name": "BaseBdev2", 00:17:23.430 "uuid": "2a995ab1-b505-5eec-9c8a-53d63b89d4c6", 00:17:23.430 "is_configured": true, 00:17:23.430 "data_offset": 2048, 00:17:23.430 "data_size": 63488 00:17:23.430 } 00:17:23.430 ] 00:17:23.430 }' 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:23.430 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77008 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77008 ']' 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77008 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77008 00:17:23.690 killing process with pid 77008 00:17:23.690 Received shutdown signal, test time was about 17.993397 seconds 00:17:23.690 00:17:23.690 Latency(us) 00:17:23.690 [2024-11-20T07:14:21.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.690 [2024-11-20T07:14:21.010Z] =================================================================================================================== 00:17:23.690 [2024-11-20T07:14:21.010Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77008' 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77008 00:17:23.690 [2024-11-20 07:14:20.790401] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:23.690 07:14:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77008 00:17:23.690 [2024-11-20 07:14:20.790557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.690 [2024-11-20 07:14:20.790631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.690 [2024-11-20 07:14:20.790646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:23.690 [2024-11-20 07:14:20.999343] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:17:25.067 00:17:25.067 real 0m21.362s 00:17:25.067 user 0m29.033s 00:17:25.067 sys 0m2.000s 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:17:25.067 ************************************ 00:17:25.067 END TEST raid_rebuild_test_sb_io 00:17:25.067 ************************************ 00:17:25.067 07:14:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:17:25.067 07:14:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:17:25.067 07:14:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:17:25.067 07:14:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:25.067 07:14:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:25.067 ************************************ 00:17:25.067 START TEST raid_rebuild_test 00:17:25.067 ************************************ 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77709 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77709 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77709 ']' 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.067 07:14:22 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:25.067 07:14:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.067 [2024-11-20 07:14:22.265247] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:17:25.068 [2024-11-20 07:14:22.265639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77709 ] 00:17:25.068 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:25.068 Zero copy mechanism will not be used. 
00:17:25.326 [2024-11-20 07:14:22.447238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.326 [2024-11-20 07:14:22.575577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.585 [2024-11-20 07:14:22.779112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.585 [2024-11-20 07:14:22.779338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.153 BaseBdev1_malloc 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.153 [2024-11-20 07:14:23.264241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:26.153 [2024-11-20 07:14:23.264324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.153 [2024-11-20 07:14:23.264357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:26.153 [2024-11-20 07:14:23.264384] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.153 [2024-11-20 07:14:23.267188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.153 [2024-11-20 07:14:23.267239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:26.153 BaseBdev1 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.153 BaseBdev2_malloc 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.153 [2024-11-20 07:14:23.317104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:26.153 [2024-11-20 07:14:23.317317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.153 [2024-11-20 07:14:23.317355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:26.153 [2024-11-20 07:14:23.317377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.153 [2024-11-20 07:14:23.320184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.153 [2024-11-20 07:14:23.320235] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:26.153 BaseBdev2 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.153 BaseBdev3_malloc 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.153 [2024-11-20 07:14:23.380063] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:26.153 [2024-11-20 07:14:23.380123] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.153 [2024-11-20 07:14:23.380154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:26.153 [2024-11-20 07:14:23.380173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.153 [2024-11-20 07:14:23.382865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.153 [2024-11-20 07:14:23.382940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:26.153 BaseBdev3 00:17:26.153 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.154 
07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.154 BaseBdev4_malloc 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.154 [2024-11-20 07:14:23.432679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:26.154 [2024-11-20 07:14:23.432894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.154 [2024-11-20 07:14:23.432932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:26.154 [2024-11-20 07:14:23.432951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.154 [2024-11-20 07:14:23.435606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.154 [2024-11-20 07:14:23.435658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:26.154 BaseBdev4 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:26.154 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.412 spare_malloc 00:17:26.412 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.412 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:26.412 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.412 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.412 spare_delay 00:17:26.412 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.413 [2024-11-20 07:14:23.493490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:26.413 [2024-11-20 07:14:23.493593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.413 [2024-11-20 07:14:23.493621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:26.413 [2024-11-20 07:14:23.493638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.413 [2024-11-20 07:14:23.496484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.413 [2024-11-20 07:14:23.496553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:26.413 spare 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r 
raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.413 [2024-11-20 07:14:23.501546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.413 [2024-11-20 07:14:23.504171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:26.413 [2024-11-20 07:14:23.504274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:26.413 [2024-11-20 07:14:23.504354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:26.413 [2024-11-20 07:14:23.504475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:26.413 [2024-11-20 07:14:23.504497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:17:26.413 [2024-11-20 07:14:23.504806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:26.413 [2024-11-20 07:14:23.505227] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:26.413 [2024-11-20 07:14:23.505366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:26.413 [2024-11-20 07:14:23.505729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.413 "name": "raid_bdev1", 00:17:26.413 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:26.413 "strip_size_kb": 0, 00:17:26.413 "state": "online", 00:17:26.413 "raid_level": "raid1", 00:17:26.413 "superblock": false, 00:17:26.413 "num_base_bdevs": 4, 00:17:26.413 "num_base_bdevs_discovered": 4, 00:17:26.413 "num_base_bdevs_operational": 4, 00:17:26.413 "base_bdevs_list": [ 00:17:26.413 { 00:17:26.413 "name": "BaseBdev1", 00:17:26.413 "uuid": "2cf45bd8-5ad0-5cb1-8828-7803aa94fa60", 00:17:26.413 "is_configured": true, 00:17:26.413 "data_offset": 0, 00:17:26.413 "data_size": 65536 00:17:26.413 }, 00:17:26.413 { 00:17:26.413 
"name": "BaseBdev2", 00:17:26.413 "uuid": "3d1a35af-1343-5bb8-8b0a-76f8d0f32a82", 00:17:26.413 "is_configured": true, 00:17:26.413 "data_offset": 0, 00:17:26.413 "data_size": 65536 00:17:26.413 }, 00:17:26.413 { 00:17:26.413 "name": "BaseBdev3", 00:17:26.413 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:26.413 "is_configured": true, 00:17:26.413 "data_offset": 0, 00:17:26.413 "data_size": 65536 00:17:26.413 }, 00:17:26.413 { 00:17:26.413 "name": "BaseBdev4", 00:17:26.413 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:26.413 "is_configured": true, 00:17:26.413 "data_offset": 0, 00:17:26.413 "data_size": 65536 00:17:26.413 } 00:17:26.413 ] 00:17:26.413 }' 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.413 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.982 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:26.982 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.982 07:14:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.982 07:14:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:26.982 [2024-11-20 07:14:24.002263] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq 
-r '.[].base_bdevs_list[0].data_offset' 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:26.982 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:27.240 [2024-11-20 07:14:24.389991] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:27.240 /dev/nbd0 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.240 
07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.240 1+0 records in 00:17:27.240 1+0 records out 00:17:27.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037706 s, 10.9 MB/s 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 
00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:27.240 07:14:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:17:37.211 65536+0 records in 00:17:37.211 65536+0 records out 00:17:37.211 33554432 bytes (34 MB, 32 MiB) copied, 8.48451 s, 4.0 MB/s 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:37.211 07:14:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:37.211 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:37.211 [2024-11-20 07:14:33.240397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.211 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:37.211 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:37.211 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:37.211 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:37.212 07:14:33 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.212 [2024-11-20 07:14:33.253692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.212 "name": "raid_bdev1", 00:17:37.212 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:37.212 "strip_size_kb": 0, 00:17:37.212 "state": "online", 00:17:37.212 "raid_level": "raid1", 00:17:37.212 "superblock": false, 00:17:37.212 "num_base_bdevs": 4, 00:17:37.212 "num_base_bdevs_discovered": 3, 00:17:37.212 "num_base_bdevs_operational": 3, 00:17:37.212 "base_bdevs_list": [ 00:17:37.212 { 00:17:37.212 "name": null, 00:17:37.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.212 "is_configured": false, 00:17:37.212 "data_offset": 0, 00:17:37.212 "data_size": 65536 00:17:37.212 }, 00:17:37.212 { 00:17:37.212 "name": "BaseBdev2", 00:17:37.212 "uuid": "3d1a35af-1343-5bb8-8b0a-76f8d0f32a82", 00:17:37.212 "is_configured": true, 00:17:37.212 "data_offset": 0, 00:17:37.212 "data_size": 65536 00:17:37.212 }, 00:17:37.212 { 00:17:37.212 "name": "BaseBdev3", 00:17:37.212 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:37.212 "is_configured": true, 00:17:37.212 "data_offset": 0, 00:17:37.212 "data_size": 65536 00:17:37.212 }, 00:17:37.212 { 00:17:37.212 "name": "BaseBdev4", 00:17:37.212 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:37.212 "is_configured": true, 00:17:37.212 "data_offset": 0, 00:17:37.212 "data_size": 65536 00:17:37.212 } 00:17:37.212 ] 00:17:37.212 }' 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.212 [2024-11-20 07:14:33.741796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.212 [2024-11-20 07:14:33.756195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.212 07:14:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:37.212 [2024-11-20 07:14:33.758618] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.470 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.730 "name": "raid_bdev1", 00:17:37.730 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 
00:17:37.730 "strip_size_kb": 0, 00:17:37.730 "state": "online", 00:17:37.730 "raid_level": "raid1", 00:17:37.730 "superblock": false, 00:17:37.730 "num_base_bdevs": 4, 00:17:37.730 "num_base_bdevs_discovered": 4, 00:17:37.730 "num_base_bdevs_operational": 4, 00:17:37.730 "process": { 00:17:37.730 "type": "rebuild", 00:17:37.730 "target": "spare", 00:17:37.730 "progress": { 00:17:37.730 "blocks": 20480, 00:17:37.730 "percent": 31 00:17:37.730 } 00:17:37.730 }, 00:17:37.730 "base_bdevs_list": [ 00:17:37.730 { 00:17:37.730 "name": "spare", 00:17:37.730 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 }, 00:17:37.730 { 00:17:37.730 "name": "BaseBdev2", 00:17:37.730 "uuid": "3d1a35af-1343-5bb8-8b0a-76f8d0f32a82", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 }, 00:17:37.730 { 00:17:37.730 "name": "BaseBdev3", 00:17:37.730 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 }, 00:17:37.730 { 00:17:37.730 "name": "BaseBdev4", 00:17:37.730 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 } 00:17:37.730 ] 00:17:37.730 }' 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 
00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.730 [2024-11-20 07:14:34.923656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.730 [2024-11-20 07:14:34.967461] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.730 [2024-11-20 07:14:34.967785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.730 [2024-11-20 07:14:34.967818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.730 [2024-11-20 07:14:34.967835] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.730 07:14:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.730 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.730 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.730 "name": "raid_bdev1", 00:17:37.730 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:37.730 "strip_size_kb": 0, 00:17:37.730 "state": "online", 00:17:37.730 "raid_level": "raid1", 00:17:37.730 "superblock": false, 00:17:37.730 "num_base_bdevs": 4, 00:17:37.730 "num_base_bdevs_discovered": 3, 00:17:37.730 "num_base_bdevs_operational": 3, 00:17:37.730 "base_bdevs_list": [ 00:17:37.730 { 00:17:37.730 "name": null, 00:17:37.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.730 "is_configured": false, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 }, 00:17:37.730 { 00:17:37.730 "name": "BaseBdev2", 00:17:37.730 "uuid": "3d1a35af-1343-5bb8-8b0a-76f8d0f32a82", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 }, 00:17:37.730 { 00:17:37.730 "name": "BaseBdev3", 00:17:37.730 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 }, 00:17:37.730 { 00:17:37.730 "name": "BaseBdev4", 00:17:37.730 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:37.730 "is_configured": true, 00:17:37.730 "data_offset": 0, 00:17:37.730 "data_size": 65536 00:17:37.730 } 00:17:37.730 ] 00:17:37.730 }' 00:17:37.730 07:14:35 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.730 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.298 "name": "raid_bdev1", 00:17:38.298 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:38.298 "strip_size_kb": 0, 00:17:38.298 "state": "online", 00:17:38.298 "raid_level": "raid1", 00:17:38.298 "superblock": false, 00:17:38.298 "num_base_bdevs": 4, 00:17:38.298 "num_base_bdevs_discovered": 3, 00:17:38.298 "num_base_bdevs_operational": 3, 00:17:38.298 "base_bdevs_list": [ 00:17:38.298 { 00:17:38.298 "name": null, 00:17:38.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.298 "is_configured": false, 00:17:38.298 "data_offset": 0, 00:17:38.298 "data_size": 65536 00:17:38.298 }, 00:17:38.298 { 00:17:38.298 "name": "BaseBdev2", 00:17:38.298 "uuid": 
"3d1a35af-1343-5bb8-8b0a-76f8d0f32a82", 00:17:38.298 "is_configured": true, 00:17:38.298 "data_offset": 0, 00:17:38.298 "data_size": 65536 00:17:38.298 }, 00:17:38.298 { 00:17:38.298 "name": "BaseBdev3", 00:17:38.298 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:38.298 "is_configured": true, 00:17:38.298 "data_offset": 0, 00:17:38.298 "data_size": 65536 00:17:38.298 }, 00:17:38.298 { 00:17:38.298 "name": "BaseBdev4", 00:17:38.298 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:38.298 "is_configured": true, 00:17:38.298 "data_offset": 0, 00:17:38.298 "data_size": 65536 00:17:38.298 } 00:17:38.298 ] 00:17:38.298 }' 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:38.298 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.557 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:38.557 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:38.557 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.557 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.557 [2024-11-20 07:14:35.667896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:38.557 [2024-11-20 07:14:35.681568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:17:38.557 07:14:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.557 07:14:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:38.557 [2024-11-20 07:14:35.684227] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:39.494 07:14:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.494 "name": "raid_bdev1", 00:17:39.494 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:39.494 "strip_size_kb": 0, 00:17:39.494 "state": "online", 00:17:39.494 "raid_level": "raid1", 00:17:39.494 "superblock": false, 00:17:39.494 "num_base_bdevs": 4, 00:17:39.494 "num_base_bdevs_discovered": 4, 00:17:39.494 "num_base_bdevs_operational": 4, 00:17:39.494 "process": { 00:17:39.494 "type": "rebuild", 00:17:39.494 "target": "spare", 00:17:39.494 "progress": { 00:17:39.494 "blocks": 20480, 00:17:39.494 "percent": 31 00:17:39.494 } 00:17:39.494 }, 00:17:39.494 "base_bdevs_list": [ 00:17:39.494 { 00:17:39.494 "name": "spare", 00:17:39.494 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 0, 00:17:39.494 "data_size": 65536 00:17:39.494 }, 00:17:39.494 { 
00:17:39.494 "name": "BaseBdev2", 00:17:39.494 "uuid": "3d1a35af-1343-5bb8-8b0a-76f8d0f32a82", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 0, 00:17:39.494 "data_size": 65536 00:17:39.494 }, 00:17:39.494 { 00:17:39.494 "name": "BaseBdev3", 00:17:39.494 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 0, 00:17:39.494 "data_size": 65536 00:17:39.494 }, 00:17:39.494 { 00:17:39.494 "name": "BaseBdev4", 00:17:39.494 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:39.494 "is_configured": true, 00:17:39.494 "data_offset": 0, 00:17:39.494 "data_size": 65536 00:17:39.494 } 00:17:39.494 ] 00:17:39.494 }' 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.494 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.754 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.754 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.755 [2024-11-20 07:14:36.853334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:39.755 
[2024-11-20 07:14:36.893258] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.755 "name": "raid_bdev1", 00:17:39.755 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:39.755 "strip_size_kb": 0, 00:17:39.755 "state": "online", 00:17:39.755 "raid_level": "raid1", 00:17:39.755 "superblock": false, 00:17:39.755 "num_base_bdevs": 4, 00:17:39.755 "num_base_bdevs_discovered": 3, 00:17:39.755 "num_base_bdevs_operational": 3, 00:17:39.755 "process": { 
00:17:39.755 "type": "rebuild", 00:17:39.755 "target": "spare", 00:17:39.755 "progress": { 00:17:39.755 "blocks": 24576, 00:17:39.755 "percent": 37 00:17:39.755 } 00:17:39.755 }, 00:17:39.755 "base_bdevs_list": [ 00:17:39.755 { 00:17:39.755 "name": "spare", 00:17:39.755 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:39.755 "is_configured": true, 00:17:39.755 "data_offset": 0, 00:17:39.755 "data_size": 65536 00:17:39.755 }, 00:17:39.755 { 00:17:39.755 "name": null, 00:17:39.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.755 "is_configured": false, 00:17:39.755 "data_offset": 0, 00:17:39.755 "data_size": 65536 00:17:39.755 }, 00:17:39.755 { 00:17:39.755 "name": "BaseBdev3", 00:17:39.755 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:39.755 "is_configured": true, 00:17:39.755 "data_offset": 0, 00:17:39.755 "data_size": 65536 00:17:39.755 }, 00:17:39.755 { 00:17:39.755 "name": "BaseBdev4", 00:17:39.755 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:39.755 "is_configured": true, 00:17:39.755 "data_offset": 0, 00:17:39.755 "data_size": 65536 00:17:39.755 } 00:17:39.755 ] 00:17:39.755 }' 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:39.755 07:14:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=482 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.755 07:14:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.014 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.014 "name": "raid_bdev1", 00:17:40.014 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:40.014 "strip_size_kb": 0, 00:17:40.014 "state": "online", 00:17:40.014 "raid_level": "raid1", 00:17:40.014 "superblock": false, 00:17:40.014 "num_base_bdevs": 4, 00:17:40.014 "num_base_bdevs_discovered": 3, 00:17:40.014 "num_base_bdevs_operational": 3, 00:17:40.014 "process": { 00:17:40.014 "type": "rebuild", 00:17:40.014 "target": "spare", 00:17:40.014 "progress": { 00:17:40.014 "blocks": 26624, 00:17:40.014 "percent": 40 00:17:40.014 } 00:17:40.014 }, 00:17:40.014 "base_bdevs_list": [ 00:17:40.014 { 00:17:40.014 "name": "spare", 00:17:40.014 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:40.014 "is_configured": true, 00:17:40.014 "data_offset": 0, 00:17:40.014 "data_size": 65536 00:17:40.014 }, 00:17:40.014 { 00:17:40.014 "name": null, 00:17:40.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.014 "is_configured": false, 00:17:40.014 "data_offset": 0, 00:17:40.014 "data_size": 65536 00:17:40.014 }, 
00:17:40.014 { 00:17:40.014 "name": "BaseBdev3", 00:17:40.014 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:40.014 "is_configured": true, 00:17:40.014 "data_offset": 0, 00:17:40.014 "data_size": 65536 00:17:40.014 }, 00:17:40.014 { 00:17:40.014 "name": "BaseBdev4", 00:17:40.014 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:40.014 "is_configured": true, 00:17:40.014 "data_offset": 0, 00:17:40.014 "data_size": 65536 00:17:40.014 } 00:17:40.014 ] 00:17:40.014 }' 00:17:40.014 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.014 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:40.014 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.014 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.014 07:14:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.952 "name": "raid_bdev1", 00:17:40.952 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:40.952 "strip_size_kb": 0, 00:17:40.952 "state": "online", 00:17:40.952 "raid_level": "raid1", 00:17:40.952 "superblock": false, 00:17:40.952 "num_base_bdevs": 4, 00:17:40.952 "num_base_bdevs_discovered": 3, 00:17:40.952 "num_base_bdevs_operational": 3, 00:17:40.952 "process": { 00:17:40.952 "type": "rebuild", 00:17:40.952 "target": "spare", 00:17:40.952 "progress": { 00:17:40.952 "blocks": 51200, 00:17:40.952 "percent": 78 00:17:40.952 } 00:17:40.952 }, 00:17:40.952 "base_bdevs_list": [ 00:17:40.952 { 00:17:40.952 "name": "spare", 00:17:40.952 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:40.952 "is_configured": true, 00:17:40.952 "data_offset": 0, 00:17:40.952 "data_size": 65536 00:17:40.952 }, 00:17:40.952 { 00:17:40.952 "name": null, 00:17:40.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.952 "is_configured": false, 00:17:40.952 "data_offset": 0, 00:17:40.952 "data_size": 65536 00:17:40.952 }, 00:17:40.952 { 00:17:40.952 "name": "BaseBdev3", 00:17:40.952 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:40.952 "is_configured": true, 00:17:40.952 "data_offset": 0, 00:17:40.952 "data_size": 65536 00:17:40.952 }, 00:17:40.952 { 00:17:40.952 "name": "BaseBdev4", 00:17:40.952 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:40.952 "is_configured": true, 00:17:40.952 "data_offset": 0, 00:17:40.952 "data_size": 65536 00:17:40.952 } 00:17:40.952 ] 00:17:40.952 }' 00:17:40.952 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.210 07:14:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.210 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.210 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.210 07:14:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:41.774 [2024-11-20 07:14:38.908108] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:41.775 [2024-11-20 07:14:38.908479] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:41.775 [2024-11-20 07:14:38.908561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.403 "name": "raid_bdev1", 00:17:42.403 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:42.403 "strip_size_kb": 0, 00:17:42.403 "state": "online", 00:17:42.403 "raid_level": "raid1", 00:17:42.403 "superblock": false, 00:17:42.403 "num_base_bdevs": 4, 00:17:42.403 "num_base_bdevs_discovered": 3, 00:17:42.403 "num_base_bdevs_operational": 3, 00:17:42.403 "base_bdevs_list": [ 00:17:42.403 { 00:17:42.403 "name": "spare", 00:17:42.403 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:42.403 "is_configured": true, 00:17:42.403 "data_offset": 0, 00:17:42.403 "data_size": 65536 00:17:42.403 }, 00:17:42.403 { 00:17:42.403 "name": null, 00:17:42.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.403 "is_configured": false, 00:17:42.403 "data_offset": 0, 00:17:42.403 "data_size": 65536 00:17:42.403 }, 00:17:42.403 { 00:17:42.403 "name": "BaseBdev3", 00:17:42.403 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:42.403 "is_configured": true, 00:17:42.403 "data_offset": 0, 00:17:42.403 "data_size": 65536 00:17:42.403 }, 00:17:42.403 { 00:17:42.403 "name": "BaseBdev4", 00:17:42.403 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:42.403 "is_configured": true, 00:17:42.403 "data_offset": 0, 00:17:42.403 "data_size": 65536 00:17:42.403 } 00:17:42.403 ] 00:17:42.403 }' 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.403 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.403 "name": "raid_bdev1", 00:17:42.403 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:42.403 "strip_size_kb": 0, 00:17:42.403 "state": "online", 00:17:42.403 "raid_level": "raid1", 00:17:42.403 "superblock": false, 00:17:42.403 "num_base_bdevs": 4, 00:17:42.403 "num_base_bdevs_discovered": 3, 00:17:42.403 "num_base_bdevs_operational": 3, 00:17:42.403 "base_bdevs_list": [ 00:17:42.403 { 00:17:42.403 "name": "spare", 00:17:42.403 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:42.403 "is_configured": true, 00:17:42.403 "data_offset": 0, 00:17:42.403 "data_size": 65536 00:17:42.403 }, 00:17:42.403 { 00:17:42.404 "name": null, 00:17:42.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.404 "is_configured": false, 00:17:42.404 "data_offset": 0, 00:17:42.404 "data_size": 65536 00:17:42.404 }, 00:17:42.404 { 00:17:42.404 "name": "BaseBdev3", 00:17:42.404 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 
00:17:42.404 "is_configured": true, 00:17:42.404 "data_offset": 0, 00:17:42.404 "data_size": 65536 00:17:42.404 }, 00:17:42.404 { 00:17:42.404 "name": "BaseBdev4", 00:17:42.404 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:42.404 "is_configured": true, 00:17:42.404 "data_offset": 0, 00:17:42.404 "data_size": 65536 00:17:42.404 } 00:17:42.404 ] 00:17:42.404 }' 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.404 
07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.404 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.662 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.662 "name": "raid_bdev1", 00:17:42.662 "uuid": "ab0c03c8-a846-4327-81cd-4d3bd718b0ff", 00:17:42.662 "strip_size_kb": 0, 00:17:42.662 "state": "online", 00:17:42.662 "raid_level": "raid1", 00:17:42.662 "superblock": false, 00:17:42.662 "num_base_bdevs": 4, 00:17:42.662 "num_base_bdevs_discovered": 3, 00:17:42.662 "num_base_bdevs_operational": 3, 00:17:42.662 "base_bdevs_list": [ 00:17:42.662 { 00:17:42.662 "name": "spare", 00:17:42.662 "uuid": "9c61312b-9de2-5e10-9f00-a47297248874", 00:17:42.662 "is_configured": true, 00:17:42.662 "data_offset": 0, 00:17:42.662 "data_size": 65536 00:17:42.662 }, 00:17:42.662 { 00:17:42.662 "name": null, 00:17:42.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.662 "is_configured": false, 00:17:42.662 "data_offset": 0, 00:17:42.662 "data_size": 65536 00:17:42.662 }, 00:17:42.662 { 00:17:42.662 "name": "BaseBdev3", 00:17:42.662 "uuid": "31797a09-5d48-5570-8b6a-1d7848fa5fac", 00:17:42.662 "is_configured": true, 00:17:42.662 "data_offset": 0, 00:17:42.662 "data_size": 65536 00:17:42.662 }, 00:17:42.662 { 00:17:42.662 "name": "BaseBdev4", 00:17:42.662 "uuid": "ef86c682-65a9-52bd-9d44-6bbcfcb3a25a", 00:17:42.662 "is_configured": true, 00:17:42.662 "data_offset": 0, 00:17:42.662 "data_size": 65536 00:17:42.662 } 00:17:42.662 ] 00:17:42.662 }' 00:17:42.662 07:14:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.662 07:14:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.920 [2024-11-20 07:14:40.200623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:42.920 [2024-11-20 07:14:40.200661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:42.920 [2024-11-20 07:14:40.200757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.920 [2024-11-20 07:14:40.200914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.920 [2024-11-20 07:14:40.200933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:42.920 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.178 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:43.436 /dev/nbd0 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.436 
07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.436 1+0 records in 00:17:43.436 1+0 records out 00:17:43.436 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559824 s, 7.3 MB/s 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.436 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:43.694 /dev/nbd1 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:43.694 07:14:40 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:43.694 1+0 records in 00:17:43.694 1+0 records out 00:17:43.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341344 s, 12.0 MB/s 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.694 07:14:40 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:44.020 07:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:44.020 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:44.020 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.020 07:14:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:44.020 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:44.020 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.020 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.280 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.538 07:14:41 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77709 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77709 ']' 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77709 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77709 00:17:44.538 killing process with pid 77709 00:17:44.538 Received shutdown signal, test time was about 60.000000 seconds 00:17:44.538 00:17:44.538 Latency(us) 00:17:44.538 [2024-11-20T07:14:41.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.538 [2024-11-20T07:14:41.858Z] =================================================================================================================== 00:17:44.538 [2024-11-20T07:14:41.858Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77709' 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77709 00:17:44.538 [2024-11-20 
07:14:41.777215] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.538 07:14:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77709 00:17:45.105 [2024-11-20 07:14:42.218803] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:46.039 00:17:46.039 real 0m21.115s 00:17:46.039 user 0m23.772s 00:17:46.039 sys 0m3.851s 00:17:46.039 ************************************ 00:17:46.039 END TEST raid_rebuild_test 00:17:46.039 ************************************ 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:46.039 07:14:43 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:17:46.039 07:14:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:46.039 07:14:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.039 07:14:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:46.039 ************************************ 00:17:46.039 START TEST raid_rebuild_test_sb 00:17:46.039 ************************************ 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:46.039 07:14:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78191 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78191 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78191 ']' 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:46.039 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.040 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:46.040 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.040 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:46.040 07:14:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.298 [2024-11-20 07:14:43.425552] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:17:46.298 [2024-11-20 07:14:43.425963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78191 ] 00:17:46.298 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:46.298 Zero copy mechanism will not be used. 00:17:46.298 [2024-11-20 07:14:43.600505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.557 [2024-11-20 07:14:43.731517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.816 [2024-11-20 07:14:43.939585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:46.816 [2024-11-20 07:14:43.939788] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 BaseBdev1_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:47.383 [2024-11-20 07:14:44.508853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:47.383 [2024-11-20 07:14:44.509017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.383 [2024-11-20 07:14:44.509054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:47.383 [2024-11-20 07:14:44.509086] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.383 [2024-11-20 07:14:44.512089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.383 [2024-11-20 07:14:44.512299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:47.383 BaseBdev1 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 BaseBdev2_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 [2024-11-20 07:14:44.563417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:47.383 [2024-11-20 
07:14:44.563503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.383 [2024-11-20 07:14:44.563530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:47.383 [2024-11-20 07:14:44.563548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.383 [2024-11-20 07:14:44.566478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.383 [2024-11-20 07:14:44.566541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:47.383 BaseBdev2 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 BaseBdev3_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 [2024-11-20 07:14:44.633419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:47.383 [2024-11-20 07:14:44.633693] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.383 [2024-11-20 07:14:44.633750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:17:47.383 [2024-11-20 07:14:44.633775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.383 [2024-11-20 07:14:44.636756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.383 [2024-11-20 07:14:44.637016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:47.383 BaseBdev3 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 BaseBdev4_malloc 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.383 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.383 [2024-11-20 07:14:44.691118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:47.384 [2024-11-20 07:14:44.691191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.384 [2024-11-20 07:14:44.691219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:47.384 [2024-11-20 07:14:44.691237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.384 [2024-11-20 07:14:44.694166] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.384 [2024-11-20 07:14:44.694358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:17:47.384 BaseBdev4 00:17:47.384 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.384 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:47.384 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.384 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.642 spare_malloc 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.642 spare_delay 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.642 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.643 [2024-11-20 07:14:44.751746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:47.643 [2024-11-20 07:14:44.751840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.643 [2024-11-20 07:14:44.751870] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000a880 00:17:47.643 [2024-11-20 07:14:44.751924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.643 [2024-11-20 07:14:44.754844] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.643 [2024-11-20 07:14:44.754944] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:47.643 spare 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.643 [2024-11-20 07:14:44.763815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.643 [2024-11-20 07:14:44.766469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.643 [2024-11-20 07:14:44.766563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.643 [2024-11-20 07:14:44.766643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.643 [2024-11-20 07:14:44.766914] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:47.643 [2024-11-20 07:14:44.766943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:47.643 [2024-11-20 07:14:44.767262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:47.643 [2024-11-20 07:14:44.767501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:47.643 [2024-11-20 07:14:44.767518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:17:47.643 [2024-11-20 07:14:44.767755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:17:47.643 "name": "raid_bdev1", 00:17:47.643 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:17:47.643 "strip_size_kb": 0, 00:17:47.643 "state": "online", 00:17:47.643 "raid_level": "raid1", 00:17:47.643 "superblock": true, 00:17:47.643 "num_base_bdevs": 4, 00:17:47.643 "num_base_bdevs_discovered": 4, 00:17:47.643 "num_base_bdevs_operational": 4, 00:17:47.643 "base_bdevs_list": [ 00:17:47.643 { 00:17:47.643 "name": "BaseBdev1", 00:17:47.643 "uuid": "c688d501-ee83-52bc-84eb-acda00ca94ff", 00:17:47.643 "is_configured": true, 00:17:47.643 "data_offset": 2048, 00:17:47.643 "data_size": 63488 00:17:47.643 }, 00:17:47.643 { 00:17:47.643 "name": "BaseBdev2", 00:17:47.643 "uuid": "1614fdae-ca3c-5a5b-b379-4c3b8f2a513f", 00:17:47.643 "is_configured": true, 00:17:47.643 "data_offset": 2048, 00:17:47.643 "data_size": 63488 00:17:47.643 }, 00:17:47.643 { 00:17:47.643 "name": "BaseBdev3", 00:17:47.643 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:17:47.643 "is_configured": true, 00:17:47.643 "data_offset": 2048, 00:17:47.643 "data_size": 63488 00:17:47.643 }, 00:17:47.643 { 00:17:47.643 "name": "BaseBdev4", 00:17:47.643 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:17:47.643 "is_configured": true, 00:17:47.643 "data_offset": 2048, 00:17:47.643 "data_size": 63488 00:17:47.643 } 00:17:47.643 ] 00:17:47.643 }' 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.643 07:14:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:48.209 
[2024-11-20 07:14:45.300441] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.209 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@12 -- # local i 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.210 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:48.469 [2024-11-20 07:14:45.632199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:48.469 /dev/nbd0 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:48.469 1+0 records in 00:17:48.469 1+0 records out 00:17:48.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334288 s, 12.3 MB/s 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:48.469 07:14:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:17:56.588 63488+0 records in 00:17:56.588 63488+0 records out 00:17:56.588 32505856 bytes (33 MB, 31 MiB) copied, 8.1012 s, 4.0 MB/s 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.588 07:14:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:56.847 [2024-11-20 07:14:54.086543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.847 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.847 [2024-11-20 07:14:54.118626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.848 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.107 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.107 "name": "raid_bdev1", 00:17:57.107 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:17:57.107 "strip_size_kb": 0, 00:17:57.107 "state": "online", 00:17:57.107 "raid_level": "raid1", 00:17:57.107 "superblock": true, 00:17:57.107 "num_base_bdevs": 4, 00:17:57.107 "num_base_bdevs_discovered": 3, 00:17:57.107 "num_base_bdevs_operational": 3, 00:17:57.107 "base_bdevs_list": [ 00:17:57.107 { 00:17:57.107 "name": null, 00:17:57.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.107 "is_configured": false, 00:17:57.107 "data_offset": 0, 00:17:57.107 "data_size": 63488 00:17:57.107 }, 00:17:57.107 { 00:17:57.107 "name": "BaseBdev2", 00:17:57.107 "uuid": 
"1614fdae-ca3c-5a5b-b379-4c3b8f2a513f", 00:17:57.107 "is_configured": true, 00:17:57.107 "data_offset": 2048, 00:17:57.107 "data_size": 63488 00:17:57.107 }, 00:17:57.107 { 00:17:57.107 "name": "BaseBdev3", 00:17:57.107 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:17:57.107 "is_configured": true, 00:17:57.107 "data_offset": 2048, 00:17:57.107 "data_size": 63488 00:17:57.107 }, 00:17:57.107 { 00:17:57.107 "name": "BaseBdev4", 00:17:57.107 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:17:57.107 "is_configured": true, 00:17:57.107 "data_offset": 2048, 00:17:57.107 "data_size": 63488 00:17:57.107 } 00:17:57.107 ] 00:17:57.107 }' 00:17:57.107 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.107 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.365 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.365 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.366 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.366 [2024-11-20 07:14:54.634778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.366 [2024-11-20 07:14:54.649147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:17:57.366 07:14:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.366 07:14:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:57.366 [2024-11-20 07:14:54.651695] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.744 "name": "raid_bdev1", 00:17:58.744 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:17:58.744 "strip_size_kb": 0, 00:17:58.744 "state": "online", 00:17:58.744 "raid_level": "raid1", 00:17:58.744 "superblock": true, 00:17:58.744 "num_base_bdevs": 4, 00:17:58.744 "num_base_bdevs_discovered": 4, 00:17:58.744 "num_base_bdevs_operational": 4, 00:17:58.744 "process": { 00:17:58.744 "type": "rebuild", 00:17:58.744 "target": "spare", 00:17:58.744 "progress": { 00:17:58.744 "blocks": 20480, 00:17:58.744 "percent": 32 00:17:58.744 } 00:17:58.744 }, 00:17:58.744 "base_bdevs_list": [ 00:17:58.744 { 00:17:58.744 "name": "spare", 00:17:58.744 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:17:58.744 "is_configured": true, 00:17:58.744 "data_offset": 2048, 00:17:58.744 "data_size": 63488 00:17:58.744 }, 00:17:58.744 { 00:17:58.744 "name": "BaseBdev2", 00:17:58.744 "uuid": "1614fdae-ca3c-5a5b-b379-4c3b8f2a513f", 00:17:58.744 "is_configured": true, 00:17:58.744 "data_offset": 2048, 
00:17:58.744 "data_size": 63488 00:17:58.744 }, 00:17:58.744 { 00:17:58.744 "name": "BaseBdev3", 00:17:58.744 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:17:58.744 "is_configured": true, 00:17:58.744 "data_offset": 2048, 00:17:58.744 "data_size": 63488 00:17:58.744 }, 00:17:58.744 { 00:17:58.744 "name": "BaseBdev4", 00:17:58.744 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:17:58.744 "is_configured": true, 00:17:58.744 "data_offset": 2048, 00:17:58.744 "data_size": 63488 00:17:58.744 } 00:17:58.744 ] 00:17:58.744 }' 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.744 [2024-11-20 07:14:55.820981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.744 [2024-11-20 07:14:55.860988] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:58.744 [2024-11-20 07:14:55.861320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.744 [2024-11-20 07:14:55.861545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.744 [2024-11-20 07:14:55.861680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.744 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.745 "name": "raid_bdev1", 00:17:58.745 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:17:58.745 "strip_size_kb": 0, 00:17:58.745 "state": "online", 00:17:58.745 "raid_level": "raid1", 
00:17:58.745 "superblock": true, 00:17:58.745 "num_base_bdevs": 4, 00:17:58.745 "num_base_bdevs_discovered": 3, 00:17:58.745 "num_base_bdevs_operational": 3, 00:17:58.745 "base_bdevs_list": [ 00:17:58.745 { 00:17:58.745 "name": null, 00:17:58.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.745 "is_configured": false, 00:17:58.745 "data_offset": 0, 00:17:58.745 "data_size": 63488 00:17:58.745 }, 00:17:58.745 { 00:17:58.745 "name": "BaseBdev2", 00:17:58.745 "uuid": "1614fdae-ca3c-5a5b-b379-4c3b8f2a513f", 00:17:58.745 "is_configured": true, 00:17:58.745 "data_offset": 2048, 00:17:58.745 "data_size": 63488 00:17:58.745 }, 00:17:58.745 { 00:17:58.745 "name": "BaseBdev3", 00:17:58.745 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:17:58.745 "is_configured": true, 00:17:58.745 "data_offset": 2048, 00:17:58.745 "data_size": 63488 00:17:58.745 }, 00:17:58.745 { 00:17:58.745 "name": "BaseBdev4", 00:17:58.745 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:17:58.745 "is_configured": true, 00:17:58.745 "data_offset": 2048, 00:17:58.745 "data_size": 63488 00:17:58.745 } 00:17:58.745 ] 00:17:58.745 }' 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.745 07:14:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.314 "name": "raid_bdev1", 00:17:59.314 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:17:59.314 "strip_size_kb": 0, 00:17:59.314 "state": "online", 00:17:59.314 "raid_level": "raid1", 00:17:59.314 "superblock": true, 00:17:59.314 "num_base_bdevs": 4, 00:17:59.314 "num_base_bdevs_discovered": 3, 00:17:59.314 "num_base_bdevs_operational": 3, 00:17:59.314 "base_bdevs_list": [ 00:17:59.314 { 00:17:59.314 "name": null, 00:17:59.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.314 "is_configured": false, 00:17:59.314 "data_offset": 0, 00:17:59.314 "data_size": 63488 00:17:59.314 }, 00:17:59.314 { 00:17:59.314 "name": "BaseBdev2", 00:17:59.314 "uuid": "1614fdae-ca3c-5a5b-b379-4c3b8f2a513f", 00:17:59.314 "is_configured": true, 00:17:59.314 "data_offset": 2048, 00:17:59.314 "data_size": 63488 00:17:59.314 }, 00:17:59.314 { 00:17:59.314 "name": "BaseBdev3", 00:17:59.314 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:17:59.314 "is_configured": true, 00:17:59.314 "data_offset": 2048, 00:17:59.314 "data_size": 63488 00:17:59.314 }, 00:17:59.314 { 00:17:59.314 "name": "BaseBdev4", 00:17:59.314 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:17:59.314 "is_configured": true, 00:17:59.314 "data_offset": 2048, 00:17:59.314 "data_size": 63488 00:17:59.314 } 00:17:59.314 ] 00:17:59.314 }' 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.314 07:14:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.314 [2024-11-20 07:14:56.582328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.314 [2024-11-20 07:14:56.595932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.314 07:14:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:59.314 [2024-11-20 07:14:56.598749] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.694 "name": "raid_bdev1", 00:18:00.694 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:00.694 "strip_size_kb": 0, 00:18:00.694 "state": "online", 00:18:00.694 "raid_level": "raid1", 00:18:00.694 "superblock": true, 00:18:00.694 "num_base_bdevs": 4, 00:18:00.694 "num_base_bdevs_discovered": 4, 00:18:00.694 "num_base_bdevs_operational": 4, 00:18:00.694 "process": { 00:18:00.694 "type": "rebuild", 00:18:00.694 "target": "spare", 00:18:00.694 "progress": { 00:18:00.694 "blocks": 20480, 00:18:00.694 "percent": 32 00:18:00.694 } 00:18:00.694 }, 00:18:00.694 "base_bdevs_list": [ 00:18:00.694 { 00:18:00.694 "name": "spare", 00:18:00.694 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 2048, 00:18:00.694 "data_size": 63488 00:18:00.694 }, 00:18:00.694 { 00:18:00.694 "name": "BaseBdev2", 00:18:00.694 "uuid": "1614fdae-ca3c-5a5b-b379-4c3b8f2a513f", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 2048, 00:18:00.694 "data_size": 63488 00:18:00.694 }, 00:18:00.694 { 00:18:00.694 "name": "BaseBdev3", 00:18:00.694 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 2048, 00:18:00.694 "data_size": 63488 00:18:00.694 }, 00:18:00.694 { 00:18:00.694 "name": "BaseBdev4", 00:18:00.694 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 2048, 00:18:00.694 "data_size": 63488 00:18:00.694 } 00:18:00.694 ] 00:18:00.694 }' 00:18:00.694 07:14:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:00.694 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.694 [2024-11-20 07:14:57.768198] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:00.694 [2024-11-20 07:14:57.908351] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:00.694 07:14:57 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.694 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.694 "name": "raid_bdev1", 00:18:00.694 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:00.694 "strip_size_kb": 0, 00:18:00.694 "state": "online", 00:18:00.694 "raid_level": "raid1", 00:18:00.694 "superblock": true, 00:18:00.694 "num_base_bdevs": 4, 00:18:00.694 "num_base_bdevs_discovered": 3, 00:18:00.694 "num_base_bdevs_operational": 3, 00:18:00.694 "process": { 00:18:00.694 "type": "rebuild", 00:18:00.694 "target": "spare", 00:18:00.694 "progress": { 00:18:00.694 "blocks": 24576, 00:18:00.694 "percent": 38 00:18:00.694 } 00:18:00.694 }, 00:18:00.694 "base_bdevs_list": [ 00:18:00.694 { 00:18:00.694 "name": "spare", 00:18:00.694 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:00.694 "is_configured": true, 00:18:00.694 "data_offset": 2048, 00:18:00.694 "data_size": 63488 
00:18:00.694 }, 00:18:00.694 { 00:18:00.694 "name": null, 00:18:00.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.694 "is_configured": false, 00:18:00.695 "data_offset": 0, 00:18:00.695 "data_size": 63488 00:18:00.695 }, 00:18:00.695 { 00:18:00.695 "name": "BaseBdev3", 00:18:00.695 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:00.695 "is_configured": true, 00:18:00.695 "data_offset": 2048, 00:18:00.695 "data_size": 63488 00:18:00.695 }, 00:18:00.695 { 00:18:00.695 "name": "BaseBdev4", 00:18:00.695 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:00.695 "is_configured": true, 00:18:00.695 "data_offset": 2048, 00:18:00.695 "data_size": 63488 00:18:00.695 } 00:18:00.695 ] 00:18:00.695 }' 00:18:00.695 07:14:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=503 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.954 07:14:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.954 "name": "raid_bdev1", 00:18:00.954 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:00.954 "strip_size_kb": 0, 00:18:00.954 "state": "online", 00:18:00.954 "raid_level": "raid1", 00:18:00.954 "superblock": true, 00:18:00.954 "num_base_bdevs": 4, 00:18:00.954 "num_base_bdevs_discovered": 3, 00:18:00.954 "num_base_bdevs_operational": 3, 00:18:00.954 "process": { 00:18:00.954 "type": "rebuild", 00:18:00.954 "target": "spare", 00:18:00.954 "progress": { 00:18:00.954 "blocks": 26624, 00:18:00.954 "percent": 41 00:18:00.954 } 00:18:00.954 }, 00:18:00.954 "base_bdevs_list": [ 00:18:00.954 { 00:18:00.954 "name": "spare", 00:18:00.954 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:00.954 "is_configured": true, 00:18:00.954 "data_offset": 2048, 00:18:00.954 "data_size": 63488 00:18:00.954 }, 00:18:00.954 { 00:18:00.954 "name": null, 00:18:00.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.954 "is_configured": false, 00:18:00.954 "data_offset": 0, 00:18:00.954 "data_size": 63488 00:18:00.954 }, 00:18:00.954 { 00:18:00.954 "name": "BaseBdev3", 00:18:00.954 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:00.954 "is_configured": true, 00:18:00.954 "data_offset": 2048, 00:18:00.954 "data_size": 63488 00:18:00.954 }, 00:18:00.954 { 00:18:00.954 "name": "BaseBdev4", 00:18:00.954 "uuid": 
"da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:00.954 "is_configured": true, 00:18:00.954 "data_offset": 2048, 00:18:00.954 "data_size": 63488 00:18:00.954 } 00:18:00.954 ] 00:18:00.954 }' 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.954 07:14:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.331 07:14:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.332 07:14:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.332 "name": "raid_bdev1", 00:18:02.332 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:02.332 "strip_size_kb": 0, 00:18:02.332 "state": "online", 00:18:02.332 "raid_level": "raid1", 00:18:02.332 "superblock": true, 00:18:02.332 "num_base_bdevs": 4, 00:18:02.332 "num_base_bdevs_discovered": 3, 00:18:02.332 "num_base_bdevs_operational": 3, 00:18:02.332 "process": { 00:18:02.332 "type": "rebuild", 00:18:02.332 "target": "spare", 00:18:02.332 "progress": { 00:18:02.332 "blocks": 51200, 00:18:02.332 "percent": 80 00:18:02.332 } 00:18:02.332 }, 00:18:02.332 "base_bdevs_list": [ 00:18:02.332 { 00:18:02.332 "name": "spare", 00:18:02.332 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:02.332 "is_configured": true, 00:18:02.332 "data_offset": 2048, 00:18:02.332 "data_size": 63488 00:18:02.332 }, 00:18:02.332 { 00:18:02.332 "name": null, 00:18:02.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.332 "is_configured": false, 00:18:02.332 "data_offset": 0, 00:18:02.332 "data_size": 63488 00:18:02.332 }, 00:18:02.332 { 00:18:02.332 "name": "BaseBdev3", 00:18:02.332 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:02.332 "is_configured": true, 00:18:02.332 "data_offset": 2048, 00:18:02.332 "data_size": 63488 00:18:02.332 }, 00:18:02.332 { 00:18:02.332 "name": "BaseBdev4", 00:18:02.332 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:02.332 "is_configured": true, 00:18:02.332 "data_offset": 2048, 00:18:02.332 "data_size": 63488 00:18:02.332 } 00:18:02.332 ] 00:18:02.332 }' 00:18:02.332 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.332 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.332 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.332 07:14:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.332 07:14:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.591 [2024-11-20 07:14:59.823061] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:02.591 [2024-11-20 07:14:59.823167] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:02.591 [2024-11-20 07:14:59.823343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.158 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:03.158 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.158 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.158 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:03.158 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.159 "name": "raid_bdev1", 00:18:03.159 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:03.159 
"strip_size_kb": 0, 00:18:03.159 "state": "online", 00:18:03.159 "raid_level": "raid1", 00:18:03.159 "superblock": true, 00:18:03.159 "num_base_bdevs": 4, 00:18:03.159 "num_base_bdevs_discovered": 3, 00:18:03.159 "num_base_bdevs_operational": 3, 00:18:03.159 "base_bdevs_list": [ 00:18:03.159 { 00:18:03.159 "name": "spare", 00:18:03.159 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:03.159 "is_configured": true, 00:18:03.159 "data_offset": 2048, 00:18:03.159 "data_size": 63488 00:18:03.159 }, 00:18:03.159 { 00:18:03.159 "name": null, 00:18:03.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.159 "is_configured": false, 00:18:03.159 "data_offset": 0, 00:18:03.159 "data_size": 63488 00:18:03.159 }, 00:18:03.159 { 00:18:03.159 "name": "BaseBdev3", 00:18:03.159 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:03.159 "is_configured": true, 00:18:03.159 "data_offset": 2048, 00:18:03.159 "data_size": 63488 00:18:03.159 }, 00:18:03.159 { 00:18:03.159 "name": "BaseBdev4", 00:18:03.159 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:03.159 "is_configured": true, 00:18:03.159 "data_offset": 2048, 00:18:03.159 "data_size": 63488 00:18:03.159 } 00:18:03.159 ] 00:18:03.159 }' 00:18:03.159 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.418 "name": "raid_bdev1", 00:18:03.418 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:03.418 "strip_size_kb": 0, 00:18:03.418 "state": "online", 00:18:03.418 "raid_level": "raid1", 00:18:03.418 "superblock": true, 00:18:03.418 "num_base_bdevs": 4, 00:18:03.418 "num_base_bdevs_discovered": 3, 00:18:03.418 "num_base_bdevs_operational": 3, 00:18:03.418 "base_bdevs_list": [ 00:18:03.418 { 00:18:03.418 "name": "spare", 00:18:03.418 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:03.418 "is_configured": true, 00:18:03.418 "data_offset": 2048, 00:18:03.418 "data_size": 63488 00:18:03.418 }, 00:18:03.418 { 00:18:03.418 "name": null, 00:18:03.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.418 "is_configured": false, 00:18:03.418 "data_offset": 0, 00:18:03.418 "data_size": 63488 00:18:03.418 }, 00:18:03.418 { 00:18:03.418 "name": "BaseBdev3", 00:18:03.418 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:03.418 "is_configured": true, 00:18:03.418 "data_offset": 2048, 00:18:03.418 "data_size": 
63488 00:18:03.418 }, 00:18:03.418 { 00:18:03.418 "name": "BaseBdev4", 00:18:03.418 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:03.418 "is_configured": true, 00:18:03.418 "data_offset": 2048, 00:18:03.418 "data_size": 63488 00:18:03.418 } 00:18:03.418 ] 00:18:03.418 }' 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.418 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.677 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.677 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.677 "name": "raid_bdev1", 00:18:03.677 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:03.677 "strip_size_kb": 0, 00:18:03.677 "state": "online", 00:18:03.677 "raid_level": "raid1", 00:18:03.677 "superblock": true, 00:18:03.677 "num_base_bdevs": 4, 00:18:03.677 "num_base_bdevs_discovered": 3, 00:18:03.677 "num_base_bdevs_operational": 3, 00:18:03.677 "base_bdevs_list": [ 00:18:03.677 { 00:18:03.677 "name": "spare", 00:18:03.677 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:03.677 "is_configured": true, 00:18:03.677 "data_offset": 2048, 00:18:03.677 "data_size": 63488 00:18:03.677 }, 00:18:03.677 { 00:18:03.677 "name": null, 00:18:03.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.677 "is_configured": false, 00:18:03.677 "data_offset": 0, 00:18:03.677 "data_size": 63488 00:18:03.677 }, 00:18:03.677 { 00:18:03.677 "name": "BaseBdev3", 00:18:03.677 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:03.677 "is_configured": true, 00:18:03.677 "data_offset": 2048, 00:18:03.677 "data_size": 63488 00:18:03.677 }, 00:18:03.677 { 00:18:03.677 "name": "BaseBdev4", 00:18:03.677 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:03.677 "is_configured": true, 00:18:03.677 "data_offset": 2048, 00:18:03.677 "data_size": 63488 00:18:03.677 } 00:18:03.677 ] 00:18:03.677 }' 00:18:03.677 07:15:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.677 07:15:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
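The repeated `rpc_cmd bdev_raid_get_bdevs all` plus `jq` pattern traced above is how `verify_raid_bdev_state` and `verify_raid_bdev_process` pull a single bdev's record out of the RPC response and check its rebuild status. A minimal offline sketch of those same jq filters, run against a trimmed copy of the JSON captured in this log (no running SPDK target required; `jq` is assumed to be installed):

```shell
# Trimmed copy of the bdev_raid_get_bdevs output captured in this log.
json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
       "num_base_bdevs_discovered":3,"num_base_bdevs_operational":3}]'

# Select the record for raid_bdev1, as bdev_raid.sh@113 does.
info=$(echo "$json" | jq -r '.[] | select(.name == "raid_bdev1")')

# Extract fields the way bdev_raid.sh@176/@177 does; "none" is the
# fallback when no rebuild process is attached to the bdev.
echo "$info" | jq -r '.state'                   # online
echo "$info" | jq -r '.process.type // "none"'  # none
```

The `// "none"` alternative is what lets the script compare against the literal string `none` when the `process` object is absent, instead of handling a null specially.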
00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.973 [2024-11-20 07:15:01.247530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.973 [2024-11-20 07:15:01.247571] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.973 [2024-11-20 07:15:01.247675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.973 [2024-11-20 07:15:01.247789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.973 [2024-11-20 07:15:01.247807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:18:03.973 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- 
# nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.231 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:04.489 /dev/nbd0 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:04.489 07:15:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.489 1+0 records in 00:18:04.489 1+0 records out 00:18:04.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290934 s, 14.1 MB/s 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.489 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:04.748 /dev/nbd1 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i = 1 )) 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:04.748 1+0 records in 00:18:04.748 1+0 records out 00:18:04.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286549 s, 14.3 MB/s 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:04.748 07:15:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.007 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:05.266 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd1 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.525 [2024-11-20 07:15:02.728208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.525 [2024-11-20 07:15:02.728274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.525 [2024-11-20 07:15:02.728307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:05.525 [2024-11-20 07:15:02.728322] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.525 [2024-11-20 07:15:02.731360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.525 [2024-11-20 
07:15:02.731407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.525 [2024-11-20 07:15:02.731553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.525 [2024-11-20 07:15:02.731627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.525 [2024-11-20 07:15:02.731812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:05.525 [2024-11-20 07:15:02.731968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:05.525 spare 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.525 [2024-11-20 07:15:02.832101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:05.525 [2024-11-20 07:15:02.832307] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:05.525 [2024-11-20 07:15:02.832728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:18:05.525 [2024-11-20 07:15:02.833011] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:05.525 [2024-11-20 07:15:02.833037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:05.525 [2024-11-20 07:15:02.833260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- 
# verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.525 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.784 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.784 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.784 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.784 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.784 "name": "raid_bdev1", 00:18:05.784 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:05.784 "strip_size_kb": 0, 00:18:05.784 "state": "online", 00:18:05.784 "raid_level": "raid1", 00:18:05.784 "superblock": true, 00:18:05.784 "num_base_bdevs": 4, 00:18:05.784 "num_base_bdevs_discovered": 3, 00:18:05.784 
"num_base_bdevs_operational": 3, 00:18:05.784 "base_bdevs_list": [ 00:18:05.784 { 00:18:05.784 "name": "spare", 00:18:05.784 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:05.784 "is_configured": true, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "name": null, 00:18:05.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.784 "is_configured": false, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "name": "BaseBdev3", 00:18:05.784 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:05.784 "is_configured": true, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 }, 00:18:05.784 { 00:18:05.784 "name": "BaseBdev4", 00:18:05.784 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:05.784 "is_configured": true, 00:18:05.784 "data_offset": 2048, 00:18:05.784 "data_size": 63488 00:18:05.784 } 00:18:05.784 ] 00:18:05.784 }' 00:18:05.784 07:15:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.784 07:15:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.043 07:15:03 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.043 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.303 "name": "raid_bdev1", 00:18:06.303 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:06.303 "strip_size_kb": 0, 00:18:06.303 "state": "online", 00:18:06.303 "raid_level": "raid1", 00:18:06.303 "superblock": true, 00:18:06.303 "num_base_bdevs": 4, 00:18:06.303 "num_base_bdevs_discovered": 3, 00:18:06.303 "num_base_bdevs_operational": 3, 00:18:06.303 "base_bdevs_list": [ 00:18:06.303 { 00:18:06.303 "name": "spare", 00:18:06.303 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:06.303 "is_configured": true, 00:18:06.303 "data_offset": 2048, 00:18:06.303 "data_size": 63488 00:18:06.303 }, 00:18:06.303 { 00:18:06.303 "name": null, 00:18:06.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.303 "is_configured": false, 00:18:06.303 "data_offset": 2048, 00:18:06.303 "data_size": 63488 00:18:06.303 }, 00:18:06.303 { 00:18:06.303 "name": "BaseBdev3", 00:18:06.303 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:06.303 "is_configured": true, 00:18:06.303 "data_offset": 2048, 00:18:06.303 "data_size": 63488 00:18:06.303 }, 00:18:06.303 { 00:18:06.303 "name": "BaseBdev4", 00:18:06.303 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:06.303 "is_configured": true, 00:18:06.303 "data_offset": 2048, 00:18:06.303 "data_size": 63488 00:18:06.303 } 00:18:06.303 ] 00:18:06.303 }' 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:06.303 07:15:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.303 [2024-11-20 07:15:03.537447] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:06.303 07:15:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.303 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.303 "name": "raid_bdev1", 00:18:06.303 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:06.303 "strip_size_kb": 0, 00:18:06.303 "state": "online", 00:18:06.303 "raid_level": "raid1", 00:18:06.303 "superblock": true, 00:18:06.303 "num_base_bdevs": 4, 00:18:06.303 "num_base_bdevs_discovered": 2, 00:18:06.303 "num_base_bdevs_operational": 2, 00:18:06.303 "base_bdevs_list": [ 00:18:06.303 { 00:18:06.303 "name": null, 00:18:06.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.304 "is_configured": false, 00:18:06.304 "data_offset": 0, 00:18:06.304 "data_size": 63488 00:18:06.304 }, 00:18:06.304 { 00:18:06.304 "name": null, 00:18:06.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.304 "is_configured": false, 00:18:06.304 "data_offset": 2048, 00:18:06.304 "data_size": 63488 00:18:06.304 }, 
00:18:06.304 { 00:18:06.304 "name": "BaseBdev3", 00:18:06.304 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:06.304 "is_configured": true, 00:18:06.304 "data_offset": 2048, 00:18:06.304 "data_size": 63488 00:18:06.304 }, 00:18:06.304 { 00:18:06.304 "name": "BaseBdev4", 00:18:06.304 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:06.304 "is_configured": true, 00:18:06.304 "data_offset": 2048, 00:18:06.304 "data_size": 63488 00:18:06.304 } 00:18:06.304 ] 00:18:06.304 }' 00:18:06.304 07:15:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.304 07:15:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.879 07:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:06.879 07:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.879 07:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.879 [2024-11-20 07:15:04.029592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.879 [2024-11-20 07:15:04.029820] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:06.879 [2024-11-20 07:15:04.029845] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:06.879 [2024-11-20 07:15:04.030089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.879 [2024-11-20 07:15:04.043378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:18:06.879 07:15:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.879 07:15:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:06.879 [2024-11-20 07:15:04.046015] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.870 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.870 "name": "raid_bdev1", 00:18:07.870 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:07.870 "strip_size_kb": 0, 00:18:07.870 "state": "online", 00:18:07.870 "raid_level": "raid1", 
00:18:07.870 "superblock": true, 00:18:07.870 "num_base_bdevs": 4, 00:18:07.870 "num_base_bdevs_discovered": 3, 00:18:07.870 "num_base_bdevs_operational": 3, 00:18:07.870 "process": { 00:18:07.870 "type": "rebuild", 00:18:07.870 "target": "spare", 00:18:07.870 "progress": { 00:18:07.870 "blocks": 20480, 00:18:07.870 "percent": 32 00:18:07.870 } 00:18:07.870 }, 00:18:07.870 "base_bdevs_list": [ 00:18:07.870 { 00:18:07.870 "name": "spare", 00:18:07.870 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:07.870 "is_configured": true, 00:18:07.870 "data_offset": 2048, 00:18:07.870 "data_size": 63488 00:18:07.870 }, 00:18:07.870 { 00:18:07.870 "name": null, 00:18:07.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.870 "is_configured": false, 00:18:07.871 "data_offset": 2048, 00:18:07.871 "data_size": 63488 00:18:07.871 }, 00:18:07.871 { 00:18:07.871 "name": "BaseBdev3", 00:18:07.871 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:07.871 "is_configured": true, 00:18:07.871 "data_offset": 2048, 00:18:07.871 "data_size": 63488 00:18:07.871 }, 00:18:07.871 { 00:18:07.871 "name": "BaseBdev4", 00:18:07.871 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:07.871 "is_configured": true, 00:18:07.871 "data_offset": 2048, 00:18:07.871 "data_size": 63488 00:18:07.871 } 00:18:07.871 ] 00:18:07.871 }' 00:18:07.871 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.871 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.871 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.130 [2024-11-20 07:15:05.235056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.130 [2024-11-20 07:15:05.254948] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:08.130 [2024-11-20 07:15:05.255181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.130 [2024-11-20 07:15:05.255327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:08.130 [2024-11-20 07:15:05.255382] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.130 "name": "raid_bdev1", 00:18:08.130 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:08.130 "strip_size_kb": 0, 00:18:08.130 "state": "online", 00:18:08.130 "raid_level": "raid1", 00:18:08.130 "superblock": true, 00:18:08.130 "num_base_bdevs": 4, 00:18:08.130 "num_base_bdevs_discovered": 2, 00:18:08.130 "num_base_bdevs_operational": 2, 00:18:08.130 "base_bdevs_list": [ 00:18:08.130 { 00:18:08.130 "name": null, 00:18:08.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.130 "is_configured": false, 00:18:08.130 "data_offset": 0, 00:18:08.130 "data_size": 63488 00:18:08.130 }, 00:18:08.130 { 00:18:08.130 "name": null, 00:18:08.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.130 "is_configured": false, 00:18:08.130 "data_offset": 2048, 00:18:08.130 "data_size": 63488 00:18:08.130 }, 00:18:08.130 { 00:18:08.130 "name": "BaseBdev3", 00:18:08.130 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:08.130 "is_configured": true, 00:18:08.130 "data_offset": 2048, 00:18:08.130 "data_size": 63488 00:18:08.130 }, 00:18:08.130 { 00:18:08.130 "name": "BaseBdev4", 00:18:08.130 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:08.130 "is_configured": true, 00:18:08.130 "data_offset": 2048, 00:18:08.130 "data_size": 63488 00:18:08.130 } 00:18:08.130 ] 00:18:08.130 }' 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:08.130 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.698 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:08.698 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.698 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.698 [2024-11-20 07:15:05.799117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:08.698 [2024-11-20 07:15:05.799350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.698 [2024-11-20 07:15:05.799408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:08.698 [2024-11-20 07:15:05.799426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.698 [2024-11-20 07:15:05.800069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.698 [2024-11-20 07:15:05.800139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:08.698 [2024-11-20 07:15:05.800265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:08.698 [2024-11-20 07:15:05.800286] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:08.698 [2024-11-20 07:15:05.800305] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:08.698 [2024-11-20 07:15:05.800347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:08.698 [2024-11-20 07:15:05.814226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:18:08.698 spare 00:18:08.698 07:15:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.698 07:15:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:08.698 [2024-11-20 07:15:05.816959] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.637 "name": "raid_bdev1", 00:18:09.637 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:09.637 "strip_size_kb": 0, 00:18:09.637 "state": "online", 00:18:09.637 
"raid_level": "raid1", 00:18:09.637 "superblock": true, 00:18:09.637 "num_base_bdevs": 4, 00:18:09.637 "num_base_bdevs_discovered": 3, 00:18:09.637 "num_base_bdevs_operational": 3, 00:18:09.637 "process": { 00:18:09.637 "type": "rebuild", 00:18:09.637 "target": "spare", 00:18:09.637 "progress": { 00:18:09.637 "blocks": 20480, 00:18:09.637 "percent": 32 00:18:09.637 } 00:18:09.637 }, 00:18:09.637 "base_bdevs_list": [ 00:18:09.637 { 00:18:09.637 "name": "spare", 00:18:09.637 "uuid": "df6093cf-61df-580b-861d-535f9fc44555", 00:18:09.637 "is_configured": true, 00:18:09.637 "data_offset": 2048, 00:18:09.637 "data_size": 63488 00:18:09.637 }, 00:18:09.637 { 00:18:09.637 "name": null, 00:18:09.637 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.637 "is_configured": false, 00:18:09.637 "data_offset": 2048, 00:18:09.637 "data_size": 63488 00:18:09.637 }, 00:18:09.637 { 00:18:09.637 "name": "BaseBdev3", 00:18:09.637 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:09.637 "is_configured": true, 00:18:09.637 "data_offset": 2048, 00:18:09.637 "data_size": 63488 00:18:09.637 }, 00:18:09.637 { 00:18:09.637 "name": "BaseBdev4", 00:18:09.637 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:09.637 "is_configured": true, 00:18:09.637 "data_offset": 2048, 00:18:09.637 "data_size": 63488 00:18:09.637 } 00:18:09.637 ] 00:18:09.637 }' 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.637 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.896 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.896 07:15:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:09.896 07:15:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.896 07:15:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.896 [2024-11-20 07:15:06.990082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.896 [2024-11-20 07:15:07.025781] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:09.896 [2024-11-20 07:15:07.025883] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.896 [2024-11-20 07:15:07.025912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.896 [2024-11-20 07:15:07.025928] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.896 
07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.896 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:09.897 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.897 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.897 "name": "raid_bdev1", 00:18:09.897 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:09.897 "strip_size_kb": 0, 00:18:09.897 "state": "online", 00:18:09.897 "raid_level": "raid1", 00:18:09.897 "superblock": true, 00:18:09.897 "num_base_bdevs": 4, 00:18:09.897 "num_base_bdevs_discovered": 2, 00:18:09.897 "num_base_bdevs_operational": 2, 00:18:09.897 "base_bdevs_list": [ 00:18:09.897 { 00:18:09.897 "name": null, 00:18:09.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.897 "is_configured": false, 00:18:09.897 "data_offset": 0, 00:18:09.897 "data_size": 63488 00:18:09.897 }, 00:18:09.897 { 00:18:09.897 "name": null, 00:18:09.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.897 "is_configured": false, 00:18:09.897 "data_offset": 2048, 00:18:09.897 "data_size": 63488 00:18:09.897 }, 00:18:09.897 { 00:18:09.897 "name": "BaseBdev3", 00:18:09.897 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:09.897 "is_configured": true, 00:18:09.897 "data_offset": 2048, 00:18:09.897 "data_size": 63488 00:18:09.897 }, 00:18:09.897 { 00:18:09.897 "name": "BaseBdev4", 00:18:09.897 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:09.897 "is_configured": true, 00:18:09.897 "data_offset": 2048, 00:18:09.897 "data_size": 63488 00:18:09.897 } 00:18:09.897 ] 00:18:09.897 }' 00:18:09.897 07:15:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.897 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:10.465 "name": "raid_bdev1", 00:18:10.465 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:10.465 "strip_size_kb": 0, 00:18:10.465 "state": "online", 00:18:10.465 "raid_level": "raid1", 00:18:10.465 "superblock": true, 00:18:10.465 "num_base_bdevs": 4, 00:18:10.465 "num_base_bdevs_discovered": 2, 00:18:10.465 "num_base_bdevs_operational": 2, 00:18:10.465 "base_bdevs_list": [ 00:18:10.465 { 00:18:10.465 "name": null, 00:18:10.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.465 "is_configured": false, 00:18:10.465 "data_offset": 0, 00:18:10.465 "data_size": 63488 00:18:10.465 }, 00:18:10.465 
{ 00:18:10.465 "name": null, 00:18:10.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.465 "is_configured": false, 00:18:10.465 "data_offset": 2048, 00:18:10.465 "data_size": 63488 00:18:10.465 }, 00:18:10.465 { 00:18:10.465 "name": "BaseBdev3", 00:18:10.465 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:10.465 "is_configured": true, 00:18:10.465 "data_offset": 2048, 00:18:10.465 "data_size": 63488 00:18:10.465 }, 00:18:10.465 { 00:18:10.465 "name": "BaseBdev4", 00:18:10.465 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:10.465 "is_configured": true, 00:18:10.465 "data_offset": 2048, 00:18:10.465 "data_size": 63488 00:18:10.465 } 00:18:10.465 ] 00:18:10.465 }' 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:10.465 [2024-11-20 07:15:07.705598] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:10.465 [2024-11-20 07:15:07.705692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.465 [2024-11-20 07:15:07.705723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:10.465 [2024-11-20 07:15:07.705739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.465 [2024-11-20 07:15:07.706331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.465 [2024-11-20 07:15:07.706380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:10.465 [2024-11-20 07:15:07.706480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:10.465 [2024-11-20 07:15:07.706507] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:10.465 [2024-11-20 07:15:07.706520] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:10.465 [2024-11-20 07:15:07.706551] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:10.465 BaseBdev1 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.465 07:15:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.403 07:15:08 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.403 07:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.663 07:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.663 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.663 "name": "raid_bdev1", 00:18:11.663 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:11.663 "strip_size_kb": 0, 00:18:11.663 "state": "online", 00:18:11.663 "raid_level": "raid1", 00:18:11.663 "superblock": true, 00:18:11.663 "num_base_bdevs": 4, 00:18:11.663 "num_base_bdevs_discovered": 2, 00:18:11.663 "num_base_bdevs_operational": 2, 00:18:11.663 "base_bdevs_list": [ 00:18:11.663 { 00:18:11.663 "name": null, 00:18:11.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.663 "is_configured": false, 00:18:11.663 "data_offset": 0, 00:18:11.663 "data_size": 63488 00:18:11.663 }, 00:18:11.663 { 00:18:11.663 "name": null, 00:18:11.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.663 
"is_configured": false, 00:18:11.663 "data_offset": 2048, 00:18:11.663 "data_size": 63488 00:18:11.663 }, 00:18:11.663 { 00:18:11.663 "name": "BaseBdev3", 00:18:11.663 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:11.663 "is_configured": true, 00:18:11.663 "data_offset": 2048, 00:18:11.663 "data_size": 63488 00:18:11.663 }, 00:18:11.663 { 00:18:11.663 "name": "BaseBdev4", 00:18:11.663 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:11.663 "is_configured": true, 00:18:11.663 "data_offset": 2048, 00:18:11.663 "data_size": 63488 00:18:11.663 } 00:18:11.663 ] 00:18:11.663 }' 00:18:11.663 07:15:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.663 07:15:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:11.921 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.179 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.179 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:12.179 "name": "raid_bdev1", 00:18:12.179 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:12.179 "strip_size_kb": 0, 00:18:12.179 "state": "online", 00:18:12.179 "raid_level": "raid1", 00:18:12.179 "superblock": true, 00:18:12.179 "num_base_bdevs": 4, 00:18:12.179 "num_base_bdevs_discovered": 2, 00:18:12.179 "num_base_bdevs_operational": 2, 00:18:12.179 "base_bdevs_list": [ 00:18:12.179 { 00:18:12.179 "name": null, 00:18:12.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.179 "is_configured": false, 00:18:12.179 "data_offset": 0, 00:18:12.179 "data_size": 63488 00:18:12.179 }, 00:18:12.179 { 00:18:12.179 "name": null, 00:18:12.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.179 "is_configured": false, 00:18:12.179 "data_offset": 2048, 00:18:12.179 "data_size": 63488 00:18:12.179 }, 00:18:12.179 { 00:18:12.179 "name": "BaseBdev3", 00:18:12.179 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:12.179 "is_configured": true, 00:18:12.179 "data_offset": 2048, 00:18:12.179 "data_size": 63488 00:18:12.179 }, 00:18:12.179 { 00:18:12.179 "name": "BaseBdev4", 00:18:12.179 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:12.179 "is_configured": true, 00:18:12.179 "data_offset": 2048, 00:18:12.179 "data_size": 63488 00:18:12.179 } 00:18:12.179 ] 00:18:12.179 }' 00:18:12.179 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.179 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:12.180 [2024-11-20 07:15:09.402098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.180 [2024-11-20 07:15:09.402346] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:12.180 [2024-11-20 07:15:09.402368] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:12.180 request: 00:18:12.180 { 00:18:12.180 "base_bdev": "BaseBdev1", 00:18:12.180 "raid_bdev": "raid_bdev1", 00:18:12.180 "method": "bdev_raid_add_base_bdev", 00:18:12.180 "req_id": 1 00:18:12.180 } 00:18:12.180 Got JSON-RPC error response 00:18:12.180 response: 00:18:12.180 { 00:18:12.180 "code": -22, 00:18:12.180 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:12.180 } 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.180 07:15:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.114 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:13.373 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.373 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.374 "name": "raid_bdev1", 00:18:13.374 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:13.374 "strip_size_kb": 0, 00:18:13.374 "state": "online", 00:18:13.374 "raid_level": "raid1", 00:18:13.374 "superblock": true, 00:18:13.374 "num_base_bdevs": 4, 00:18:13.374 "num_base_bdevs_discovered": 2, 00:18:13.374 "num_base_bdevs_operational": 2, 00:18:13.374 "base_bdevs_list": [ 00:18:13.374 { 00:18:13.374 "name": null, 00:18:13.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.374 "is_configured": false, 00:18:13.374 "data_offset": 0, 00:18:13.374 "data_size": 63488 00:18:13.374 }, 00:18:13.374 { 00:18:13.374 "name": null, 00:18:13.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.374 "is_configured": false, 00:18:13.374 "data_offset": 2048, 00:18:13.374 "data_size": 63488 00:18:13.374 }, 00:18:13.374 { 00:18:13.374 "name": "BaseBdev3", 00:18:13.374 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:13.374 "is_configured": true, 00:18:13.374 "data_offset": 2048, 00:18:13.374 "data_size": 63488 00:18:13.374 }, 00:18:13.374 { 00:18:13.374 "name": "BaseBdev4", 00:18:13.374 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:13.374 "is_configured": true, 00:18:13.374 "data_offset": 2048, 00:18:13.374 "data_size": 63488 00:18:13.374 } 00:18:13.374 ] 00:18:13.374 }' 00:18:13.374 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.374 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:13.633 07:15:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:13.633 07:15:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.892 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:13.892 "name": "raid_bdev1", 00:18:13.892 "uuid": "1271c104-fb14-42f2-9d0d-6ec6779bf90d", 00:18:13.892 "strip_size_kb": 0, 00:18:13.892 "state": "online", 00:18:13.892 "raid_level": "raid1", 00:18:13.892 "superblock": true, 00:18:13.892 "num_base_bdevs": 4, 00:18:13.892 "num_base_bdevs_discovered": 2, 00:18:13.892 "num_base_bdevs_operational": 2, 00:18:13.892 "base_bdevs_list": [ 00:18:13.892 { 00:18:13.892 "name": null, 00:18:13.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.892 "is_configured": false, 00:18:13.892 "data_offset": 0, 00:18:13.892 "data_size": 63488 00:18:13.892 }, 00:18:13.892 { 00:18:13.892 "name": null, 00:18:13.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.892 "is_configured": false, 00:18:13.892 "data_offset": 2048, 00:18:13.892 "data_size": 63488 00:18:13.892 }, 00:18:13.892 { 00:18:13.892 "name": "BaseBdev3", 00:18:13.892 "uuid": "81b807a1-6e3f-561d-aa82-371f9bb2151b", 00:18:13.892 "is_configured": true, 00:18:13.892 "data_offset": 2048, 00:18:13.892 "data_size": 63488 00:18:13.892 }, 
00:18:13.892 { 00:18:13.892 "name": "BaseBdev4", 00:18:13.892 "uuid": "da722675-2ad1-5909-baf3-54ce83ec95b0", 00:18:13.892 "is_configured": true, 00:18:13.892 "data_offset": 2048, 00:18:13.892 "data_size": 63488 00:18:13.892 } 00:18:13.892 ] 00:18:13.892 }' 00:18:13.892 07:15:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78191 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78191 ']' 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78191 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78191 00:18:13.892 killing process with pid 78191 00:18:13.892 Received shutdown signal, test time was about 60.000000 seconds 00:18:13.892 00:18:13.892 Latency(us) 00:18:13.892 [2024-11-20T07:15:11.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.892 [2024-11-20T07:15:11.212Z] =================================================================================================================== 00:18:13.892 [2024-11-20T07:15:11.212Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
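Editor's note: the checks logged above pipe the RPC output through `jq -r '.process.type // "none"'` and `jq -r '.process.target // "none"'`; jq's `//` operator substitutes the right-hand value when the left side is `null` or missing, which is why an idle raid bdev compares equal to `none`. A minimal Python mirror of that defaulting behavior, using hypothetical sample data (not taken from this run):

```python
import json

def jq_default(obj, path, default="none"):
    """Walk dotted keys like jq's .a.b; fall back to `default` on null/missing,
    mirroring jq's `//` alternative operator."""
    cur = obj
    for key in path.split("."):
        if not isinstance(cur, dict) or cur.get(key) is None:
            return default
        cur = cur[key]
    return cur

# Hypothetical shapes: an idle raid bdev has no "process" object at all,
# while one mid-rebuild reports a process type and target.
idle = json.loads('{"name": "raid_bdev1"}')
busy = json.loads('{"process": {"type": "rebuild", "target": "spare"}}')

print(jq_default(idle, "process.type"))    # -> none
print(jq_default(busy, "process.type"))    # -> rebuild
print(jq_default(busy, "process.target"))  # -> spare
```

This is why `verify_raid_bdev_process raid_bdev1 none none` passes once the rebuild has finished and the `process` object disappears from the RPC output.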
00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78191' 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78191 00:18:13.892 [2024-11-20 07:15:11.109467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.892 07:15:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78191 00:18:13.892 [2024-11-20 07:15:11.109619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.892 [2024-11-20 07:15:11.109711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.892 [2024-11-20 07:15:11.109728] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:14.460 [2024-11-20 07:15:11.550109] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:15.437 00:18:15.437 real 0m29.267s 00:18:15.437 user 0m35.444s 00:18:15.437 sys 0m3.980s 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:15.437 ************************************ 00:18:15.437 END TEST raid_rebuild_test_sb 00:18:15.437 ************************************ 00:18:15.437 07:15:12 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:18:15.437 07:15:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:15.437 07:15:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.437 07:15:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
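Editor's note: the `verify_raid_bdev_state` helper seen throughout this log selects one entry from `bdev_raid_get_bdevs all` with `jq -r '.[] | select(.name == "raid_bdev1")'` and then compares fields against the expected state. A short Python sketch of that select-and-check step, against sample data shaped like the dumps above (field values here are illustrative, not from this run):

```python
import json

# Sample shaped like the `bdev_raid_get_bdevs all` output dumped above.
sample = json.loads("""
[
  {"name": "raid_bdev1", "state": "online", "raid_level": "raid1",
   "num_base_bdevs": 4, "num_base_bdevs_discovered": 4}
]
""")

# Mirror the jq select(): keep the single entry named raid_bdev1.
info = next(b for b in sample if b["name"] == "raid_bdev1")

# Mirror the helper's comparisons on the selected entry.
assert info["state"] == "online"
assert info["raid_level"] == "raid1"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs"]
print(info["name"])  # -> raid_bdev1
```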
00:18:15.437 ************************************ 00:18:15.437 START TEST raid_rebuild_test_io 00:18:15.437 ************************************ 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:15.437 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:15.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
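Editor's note: bdevperf in this test is launched with `-o 3M` (a 3 MiB I/O size) and `-q 2`, and the runtime later reports that zero copy is skipped because the I/O size exceeds the 64 KiB threshold. A trivial check of those two figures as logged (illustrative only):

```python
io_size = 3 * 1024 * 1024        # the -o 3M bdevperf argument
zero_copy_threshold = 64 * 1024  # 65536, the threshold named in the log

print(io_size)                        # -> 3145728, matching the log message
print(io_size > zero_copy_threshold)  # -> True, so zero copy is not used
```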
00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78984 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78984 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78984 ']' 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.438 07:15:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:15.438 [2024-11-20 07:15:12.742606] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:18:15.438 [2024-11-20 07:15:12.742974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78984 ] 00:18:15.438 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:18:15.438 Zero copy mechanism will not be used. 00:18:15.698 [2024-11-20 07:15:12.919399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.957 [2024-11-20 07:15:13.050460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.957 [2024-11-20 07:15:13.259540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.957 [2024-11-20 07:15:13.259829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.525 BaseBdev1_malloc 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.525 [2024-11-20 07:15:13.781410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:16.525 [2024-11-20 07:15:13.781698] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.525 [2024-11-20 07:15:13.781907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:18:16.525 [2024-11-20 07:15:13.781948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.525 [2024-11-20 07:15:13.784892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.525 [2024-11-20 07:15:13.785015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:16.525 BaseBdev1 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.525 BaseBdev2_malloc 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.525 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.525 [2024-11-20 07:15:13.838079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:16.525 [2024-11-20 07:15:13.838156] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.525 [2024-11-20 07:15:13.838186] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:16.525 [2024-11-20 07:15:13.838206] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.525 [2024-11-20 07:15:13.841092] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.525 [2024-11-20 07:15:13.841290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:16.785 BaseBdev2 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 BaseBdev3_malloc 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 [2024-11-20 07:15:13.905011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:16.785 [2024-11-20 07:15:13.905307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.785 [2024-11-20 07:15:13.905478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:16.785 [2024-11-20 07:15:13.905625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.785 [2024-11-20 07:15:13.908672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.785 [2024-11-20 07:15:13.908858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:18:16.785 BaseBdev3 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 BaseBdev4_malloc 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 [2024-11-20 07:15:13.962713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:16.785 [2024-11-20 07:15:13.963013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.785 [2024-11-20 07:15:13.963064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:16.785 [2024-11-20 07:15:13.963087] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.785 [2024-11-20 07:15:13.965991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.785 [2024-11-20 07:15:13.966184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:16.785 BaseBdev4 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 spare_malloc 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 spare_delay 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 [2024-11-20 07:15:14.023744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:16.785 [2024-11-20 07:15:14.023857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.785 [2024-11-20 07:15:14.023906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:16.785 [2024-11-20 07:15:14.023928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.785 [2024-11-20 07:15:14.026743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.785 [2024-11-20 07:15:14.026826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:16.785 spare 
00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.785 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.785 [2024-11-20 07:15:14.031896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.785 [2024-11-20 07:15:14.034459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.785 [2024-11-20 07:15:14.034576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:16.785 [2024-11-20 07:15:14.034656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:16.785 [2024-11-20 07:15:14.034768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:16.785 [2024-11-20 07:15:14.034793] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:16.785 [2024-11-20 07:15:14.035128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:16.785 [2024-11-20 07:15:14.035363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:16.785 [2024-11-20 07:15:14.035393] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:16.786 [2024-11-20 07:15:14.035584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 4 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.786 "name": "raid_bdev1", 00:18:16.786 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:16.786 "strip_size_kb": 0, 00:18:16.786 "state": "online", 00:18:16.786 "raid_level": "raid1", 00:18:16.786 "superblock": false, 00:18:16.786 "num_base_bdevs": 4, 00:18:16.786 "num_base_bdevs_discovered": 4, 00:18:16.786 "num_base_bdevs_operational": 4, 00:18:16.786 
"base_bdevs_list": [ 00:18:16.786 { 00:18:16.786 "name": "BaseBdev1", 00:18:16.786 "uuid": "86be80d9-f2ce-50c5-9cbf-7cdbbd50d04a", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 0, 00:18:16.786 "data_size": 65536 00:18:16.786 }, 00:18:16.786 { 00:18:16.786 "name": "BaseBdev2", 00:18:16.786 "uuid": "f0fbe055-8a48-511a-beb2-24e7a0e68e57", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 0, 00:18:16.786 "data_size": 65536 00:18:16.786 }, 00:18:16.786 { 00:18:16.786 "name": "BaseBdev3", 00:18:16.786 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 0, 00:18:16.786 "data_size": 65536 00:18:16.786 }, 00:18:16.786 { 00:18:16.786 "name": "BaseBdev4", 00:18:16.786 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:16.786 "is_configured": true, 00:18:16.786 "data_offset": 0, 00:18:16.786 "data_size": 65536 00:18:16.786 } 00:18:16.786 ] 00:18:16.786 }' 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.786 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 [2024-11-20 07:15:14.552428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.354 [2024-11-20 07:15:14.660019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.354 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.355 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.614 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.614 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.614 "name": "raid_bdev1", 00:18:17.614 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:17.614 "strip_size_kb": 0, 00:18:17.614 "state": "online", 00:18:17.614 "raid_level": "raid1", 00:18:17.614 "superblock": false, 00:18:17.614 "num_base_bdevs": 4, 00:18:17.614 "num_base_bdevs_discovered": 3, 00:18:17.614 "num_base_bdevs_operational": 3, 00:18:17.614 "base_bdevs_list": [ 00:18:17.614 { 00:18:17.614 "name": null, 00:18:17.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.614 "is_configured": false, 00:18:17.614 "data_offset": 0, 00:18:17.614 "data_size": 65536 00:18:17.614 }, 00:18:17.614 { 00:18:17.614 "name": "BaseBdev2", 00:18:17.614 "uuid": "f0fbe055-8a48-511a-beb2-24e7a0e68e57", 00:18:17.614 "is_configured": true, 00:18:17.614 "data_offset": 0, 00:18:17.614 "data_size": 65536 00:18:17.614 }, 00:18:17.614 { 00:18:17.614 "name": 
"BaseBdev3", 00:18:17.614 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:17.614 "is_configured": true, 00:18:17.614 "data_offset": 0, 00:18:17.614 "data_size": 65536 00:18:17.614 }, 00:18:17.614 { 00:18:17.614 "name": "BaseBdev4", 00:18:17.614 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:17.614 "is_configured": true, 00:18:17.614 "data_offset": 0, 00:18:17.614 "data_size": 65536 00:18:17.614 } 00:18:17.614 ] 00:18:17.614 }' 00:18:17.614 07:15:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.614 07:15:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:17.614 [2024-11-20 07:15:14.792245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:17.614 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:17.614 Zero copy mechanism will not be used. 00:18:17.614 Running I/O for 60 seconds... 00:18:18.246 07:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:18.246 07:15:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.246 07:15:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:18.246 [2024-11-20 07:15:15.227723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:18.246 07:15:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.246 07:15:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:18.246 [2024-11-20 07:15:15.292988] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:18.246 [2024-11-20 07:15:15.295763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:18.246 [2024-11-20 07:15:15.422027] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:18:18.246 [2024-11-20 07:15:15.431325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:18.504 [2024-11-20 07:15:15.656684] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:18.505 [2024-11-20 07:15:15.657577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:18.763 159.00 IOPS, 477.00 MiB/s [2024-11-20T07:15:16.083Z] [2024-11-20 07:15:16.028715] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:19.021 [2024-11-20 07:15:16.251574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.021 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.021 
07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:19.021 "name": "raid_bdev1", 00:18:19.021 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:19.021 "strip_size_kb": 0, 00:18:19.021 "state": "online", 00:18:19.021 "raid_level": "raid1", 00:18:19.021 "superblock": false, 00:18:19.021 "num_base_bdevs": 4, 00:18:19.021 "num_base_bdevs_discovered": 4, 00:18:19.021 "num_base_bdevs_operational": 4, 00:18:19.021 "process": { 00:18:19.021 "type": "rebuild", 00:18:19.021 "target": "spare", 00:18:19.021 "progress": { 00:18:19.022 "blocks": 10240, 00:18:19.022 "percent": 15 00:18:19.022 } 00:18:19.022 }, 00:18:19.022 "base_bdevs_list": [ 00:18:19.022 { 00:18:19.022 "name": "spare", 00:18:19.022 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:19.022 "is_configured": true, 00:18:19.022 "data_offset": 0, 00:18:19.022 "data_size": 65536 00:18:19.022 }, 00:18:19.022 { 00:18:19.022 "name": "BaseBdev2", 00:18:19.022 "uuid": "f0fbe055-8a48-511a-beb2-24e7a0e68e57", 00:18:19.022 "is_configured": true, 00:18:19.022 "data_offset": 0, 00:18:19.022 "data_size": 65536 00:18:19.022 }, 00:18:19.022 { 00:18:19.022 "name": "BaseBdev3", 00:18:19.022 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:19.022 "is_configured": true, 00:18:19.022 "data_offset": 0, 00:18:19.022 "data_size": 65536 00:18:19.022 }, 00:18:19.022 { 00:18:19.022 "name": "BaseBdev4", 00:18:19.022 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:19.022 "is_configured": true, 00:18:19.022 "data_offset": 0, 00:18:19.022 "data_size": 65536 00:18:19.022 } 00:18:19.022 ] 00:18:19.022 }' 00:18:19.022 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:19.281 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:19.281 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:19.281 07:15:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:19.281 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:19.281 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.281 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.281 [2024-11-20 07:15:16.434835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.281 [2024-11-20 07:15:16.584675] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:19.281 [2024-11-20 07:15:16.589467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.281 [2024-11-20 07:15:16.589532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:19.281 [2024-11-20 07:15:16.589555] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:19.540 [2024-11-20 07:15:16.631281] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:19.540 07:15:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.540 "name": "raid_bdev1", 00:18:19.540 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:19.540 "strip_size_kb": 0, 00:18:19.540 "state": "online", 00:18:19.540 "raid_level": "raid1", 00:18:19.540 "superblock": false, 00:18:19.540 "num_base_bdevs": 4, 00:18:19.540 "num_base_bdevs_discovered": 3, 00:18:19.540 "num_base_bdevs_operational": 3, 00:18:19.540 "base_bdevs_list": [ 00:18:19.540 { 00:18:19.540 "name": null, 00:18:19.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:19.540 "is_configured": false, 00:18:19.540 "data_offset": 0, 00:18:19.540 "data_size": 65536 00:18:19.540 }, 00:18:19.540 { 00:18:19.540 "name": "BaseBdev2", 00:18:19.540 "uuid": "f0fbe055-8a48-511a-beb2-24e7a0e68e57", 00:18:19.540 "is_configured": true, 00:18:19.540 "data_offset": 0, 00:18:19.540 "data_size": 65536 00:18:19.540 }, 00:18:19.540 { 00:18:19.540 "name": "BaseBdev3", 00:18:19.540 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 
00:18:19.540 "is_configured": true, 00:18:19.540 "data_offset": 0, 00:18:19.540 "data_size": 65536 00:18:19.540 }, 00:18:19.540 { 00:18:19.540 "name": "BaseBdev4", 00:18:19.540 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:19.540 "is_configured": true, 00:18:19.540 "data_offset": 0, 00:18:19.540 "data_size": 65536 00:18:19.540 } 00:18:19.540 ] 00:18:19.540 }' 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.540 07:15:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.108 115.00 IOPS, 345.00 MiB/s [2024-11-20T07:15:17.428Z] 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.108 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:20.108 "name": "raid_bdev1", 00:18:20.109 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:20.109 "strip_size_kb": 0, 00:18:20.109 "state": "online", 00:18:20.109 
"raid_level": "raid1", 00:18:20.109 "superblock": false, 00:18:20.109 "num_base_bdevs": 4, 00:18:20.109 "num_base_bdevs_discovered": 3, 00:18:20.109 "num_base_bdevs_operational": 3, 00:18:20.109 "base_bdevs_list": [ 00:18:20.109 { 00:18:20.109 "name": null, 00:18:20.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.109 "is_configured": false, 00:18:20.109 "data_offset": 0, 00:18:20.109 "data_size": 65536 00:18:20.109 }, 00:18:20.109 { 00:18:20.109 "name": "BaseBdev2", 00:18:20.109 "uuid": "f0fbe055-8a48-511a-beb2-24e7a0e68e57", 00:18:20.109 "is_configured": true, 00:18:20.109 "data_offset": 0, 00:18:20.109 "data_size": 65536 00:18:20.109 }, 00:18:20.109 { 00:18:20.109 "name": "BaseBdev3", 00:18:20.109 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:20.109 "is_configured": true, 00:18:20.109 "data_offset": 0, 00:18:20.109 "data_size": 65536 00:18:20.109 }, 00:18:20.109 { 00:18:20.109 "name": "BaseBdev4", 00:18:20.109 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:20.109 "is_configured": true, 00:18:20.109 "data_offset": 0, 00:18:20.109 "data_size": 65536 00:18:20.109 } 00:18:20.109 ] 00:18:20.109 }' 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:20.109 [2024-11-20 07:15:17.337333] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.109 07:15:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:20.109 [2024-11-20 07:15:17.376465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:20.109 [2024-11-20 07:15:17.379058] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:20.366 [2024-11-20 07:15:17.500413] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:20.366 [2024-11-20 07:15:17.502243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:20.625 [2024-11-20 07:15:17.716155] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:20.625 [2024-11-20 07:15:17.717320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:20.883 123.00 IOPS, 369.00 MiB/s [2024-11-20T07:15:18.203Z] [2024-11-20 07:15:18.085274] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:21.143 [2024-11-20 07:15:18.243496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:21.143 [2024-11-20 07:15:18.244576] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.143 07:15:18 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.143 "name": "raid_bdev1", 00:18:21.143 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:21.143 "strip_size_kb": 0, 00:18:21.143 "state": "online", 00:18:21.143 "raid_level": "raid1", 00:18:21.143 "superblock": false, 00:18:21.143 "num_base_bdevs": 4, 00:18:21.143 "num_base_bdevs_discovered": 4, 00:18:21.143 "num_base_bdevs_operational": 4, 00:18:21.143 "process": { 00:18:21.143 "type": "rebuild", 00:18:21.143 "target": "spare", 00:18:21.143 "progress": { 00:18:21.143 "blocks": 10240, 00:18:21.143 "percent": 15 00:18:21.143 } 00:18:21.143 }, 00:18:21.143 "base_bdevs_list": [ 00:18:21.143 { 00:18:21.143 "name": "spare", 00:18:21.143 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:21.143 "is_configured": true, 00:18:21.143 "data_offset": 0, 00:18:21.143 "data_size": 65536 00:18:21.143 }, 00:18:21.143 { 00:18:21.143 "name": "BaseBdev2", 00:18:21.143 "uuid": "f0fbe055-8a48-511a-beb2-24e7a0e68e57", 00:18:21.143 "is_configured": true, 00:18:21.143 "data_offset": 0, 00:18:21.143 "data_size": 65536 00:18:21.143 }, 00:18:21.143 { 
00:18:21.143 "name": "BaseBdev3", 00:18:21.143 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:21.143 "is_configured": true, 00:18:21.143 "data_offset": 0, 00:18:21.143 "data_size": 65536 00:18:21.143 }, 00:18:21.143 { 00:18:21.143 "name": "BaseBdev4", 00:18:21.143 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:21.143 "is_configured": true, 00:18:21.143 "data_offset": 0, 00:18:21.143 "data_size": 65536 00:18:21.143 } 00:18:21.143 ] 00:18:21.143 }' 00:18:21.143 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.402 [2024-11-20 07:15:18.538515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:21.402 [2024-11-20 07:15:18.594663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:21.402 [2024-11-20 07:15:18.595410] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:21.402 [2024-11-20 07:15:18.706142] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:21.402 [2024-11-20 07:15:18.706348] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:18:21.402 [2024-11-20 07:15:18.710853] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.402 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.663 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:21.663 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.663 "name": "raid_bdev1", 00:18:21.663 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:21.663 "strip_size_kb": 0, 00:18:21.663 "state": "online", 00:18:21.663 "raid_level": "raid1", 00:18:21.663 "superblock": false, 00:18:21.663 "num_base_bdevs": 4, 00:18:21.663 "num_base_bdevs_discovered": 3, 00:18:21.663 "num_base_bdevs_operational": 3, 00:18:21.663 "process": { 00:18:21.663 "type": "rebuild", 00:18:21.663 "target": "spare", 00:18:21.663 "progress": { 00:18:21.663 "blocks": 14336, 00:18:21.663 "percent": 21 00:18:21.663 } 00:18:21.663 }, 00:18:21.663 "base_bdevs_list": [ 00:18:21.663 { 00:18:21.663 "name": "spare", 00:18:21.663 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:21.663 "is_configured": true, 00:18:21.663 "data_offset": 0, 00:18:21.663 "data_size": 65536 00:18:21.663 }, 00:18:21.663 { 00:18:21.663 "name": null, 00:18:21.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.663 "is_configured": false, 00:18:21.663 "data_offset": 0, 00:18:21.663 "data_size": 65536 00:18:21.663 }, 00:18:21.663 { 00:18:21.663 "name": "BaseBdev3", 00:18:21.663 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:21.663 "is_configured": true, 00:18:21.663 "data_offset": 0, 00:18:21.663 "data_size": 65536 00:18:21.663 }, 00:18:21.663 { 00:18:21.663 "name": "BaseBdev4", 00:18:21.663 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:21.663 "is_configured": true, 00:18:21.664 "data_offset": 0, 00:18:21.664 "data_size": 65536 00:18:21.664 } 00:18:21.664 ] 00:18:21.664 }' 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.664 114.50 IOPS, 343.50 MiB/s [2024-11-20T07:15:18.984Z] 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:21.664 [2024-11-20 07:15:18.838210] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:21.664 [2024-11-20 07:15:18.838839] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=523 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:21.664 "name": "raid_bdev1", 00:18:21.664 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:21.664 
"strip_size_kb": 0, 00:18:21.664 "state": "online", 00:18:21.664 "raid_level": "raid1", 00:18:21.664 "superblock": false, 00:18:21.664 "num_base_bdevs": 4, 00:18:21.664 "num_base_bdevs_discovered": 3, 00:18:21.664 "num_base_bdevs_operational": 3, 00:18:21.664 "process": { 00:18:21.664 "type": "rebuild", 00:18:21.664 "target": "spare", 00:18:21.664 "progress": { 00:18:21.664 "blocks": 16384, 00:18:21.664 "percent": 25 00:18:21.664 } 00:18:21.664 }, 00:18:21.664 "base_bdevs_list": [ 00:18:21.664 { 00:18:21.664 "name": "spare", 00:18:21.664 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:21.664 "is_configured": true, 00:18:21.664 "data_offset": 0, 00:18:21.664 "data_size": 65536 00:18:21.664 }, 00:18:21.664 { 00:18:21.664 "name": null, 00:18:21.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.664 "is_configured": false, 00:18:21.664 "data_offset": 0, 00:18:21.664 "data_size": 65536 00:18:21.664 }, 00:18:21.664 { 00:18:21.664 "name": "BaseBdev3", 00:18:21.664 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:21.664 "is_configured": true, 00:18:21.664 "data_offset": 0, 00:18:21.664 "data_size": 65536 00:18:21.664 }, 00:18:21.664 { 00:18:21.664 "name": "BaseBdev4", 00:18:21.664 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:21.664 "is_configured": true, 00:18:21.664 "data_offset": 0, 00:18:21.664 "data_size": 65536 00:18:21.664 } 00:18:21.664 ] 00:18:21.664 }' 00:18:21.664 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:21.925 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:21.925 07:15:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:21.926 07:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:21.926 07:15:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:21.926 [2024-11-20 07:15:19.202649] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:22.185 [2024-11-20 07:15:19.314397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:22.443 [2024-11-20 07:15:19.572385] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:22.443 [2024-11-20 07:15:19.573623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:22.702 [2024-11-20 07:15:19.796947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:18:22.961 103.60 IOPS, 310.80 MiB/s [2024-11-20T07:15:20.281Z] 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.961 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:22.961 "name": "raid_bdev1", 00:18:22.961 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:22.961 "strip_size_kb": 0, 00:18:22.961 "state": "online", 00:18:22.961 "raid_level": "raid1", 00:18:22.961 "superblock": false, 00:18:22.961 "num_base_bdevs": 4, 00:18:22.961 "num_base_bdevs_discovered": 3, 00:18:22.961 "num_base_bdevs_operational": 3, 00:18:22.961 "process": { 00:18:22.961 "type": "rebuild", 00:18:22.961 "target": "spare", 00:18:22.961 "progress": { 00:18:22.961 "blocks": 32768, 00:18:22.962 "percent": 50 00:18:22.962 } 00:18:22.962 }, 00:18:22.962 "base_bdevs_list": [ 00:18:22.962 { 00:18:22.962 "name": "spare", 00:18:22.962 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:22.962 "is_configured": true, 00:18:22.962 "data_offset": 0, 00:18:22.962 "data_size": 65536 00:18:22.962 }, 00:18:22.962 { 00:18:22.962 "name": null, 00:18:22.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.962 "is_configured": false, 00:18:22.962 "data_offset": 0, 00:18:22.962 "data_size": 65536 00:18:22.962 }, 00:18:22.962 { 00:18:22.962 "name": "BaseBdev3", 00:18:22.962 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:22.962 "is_configured": true, 00:18:22.962 "data_offset": 0, 00:18:22.962 "data_size": 65536 00:18:22.962 }, 00:18:22.962 { 00:18:22.962 "name": "BaseBdev4", 00:18:22.962 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:22.962 "is_configured": true, 00:18:22.962 "data_offset": 0, 00:18:22.962 "data_size": 65536 00:18:22.962 } 00:18:22.962 ] 00:18:22.962 }' 00:18:22.962 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:22.962 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.962 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:22.962 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.962 07:15:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:23.221 [2024-11-20 07:15:20.341007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:23.221 [2024-11-20 07:15:20.341704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:18:23.221 [2024-11-20 07:15:20.453568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:18:24.048 92.67 IOPS, 278.00 MiB/s [2024-11-20T07:15:21.368Z] [2024-11-20 07:15:21.128150] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.048 07:15:21 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.048 "name": "raid_bdev1", 00:18:24.048 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:24.048 "strip_size_kb": 0, 00:18:24.048 "state": "online", 00:18:24.048 "raid_level": "raid1", 00:18:24.048 "superblock": false, 00:18:24.048 "num_base_bdevs": 4, 00:18:24.048 "num_base_bdevs_discovered": 3, 00:18:24.048 "num_base_bdevs_operational": 3, 00:18:24.048 "process": { 00:18:24.048 "type": "rebuild", 00:18:24.048 "target": "spare", 00:18:24.048 "progress": { 00:18:24.048 "blocks": 53248, 00:18:24.048 "percent": 81 00:18:24.048 } 00:18:24.048 }, 00:18:24.048 "base_bdevs_list": [ 00:18:24.048 { 00:18:24.048 "name": "spare", 00:18:24.048 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:24.048 "is_configured": true, 00:18:24.048 "data_offset": 0, 00:18:24.048 "data_size": 65536 00:18:24.048 }, 00:18:24.048 { 00:18:24.048 "name": null, 00:18:24.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.048 "is_configured": false, 00:18:24.048 "data_offset": 0, 00:18:24.048 "data_size": 65536 00:18:24.048 }, 00:18:24.048 { 00:18:24.048 "name": "BaseBdev3", 00:18:24.048 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:24.048 "is_configured": true, 00:18:24.048 "data_offset": 0, 00:18:24.048 "data_size": 65536 00:18:24.048 }, 00:18:24.048 { 00:18:24.048 "name": "BaseBdev4", 00:18:24.048 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:24.048 "is_configured": true, 00:18:24.048 "data_offset": 0, 00:18:24.048 "data_size": 65536 00:18:24.048 } 00:18:24.048 ] 00:18:24.048 }' 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:24.048 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.307 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:24.307 07:15:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:24.823 82.29 IOPS, 246.86 MiB/s [2024-11-20T07:15:22.143Z] [2024-11-20 07:15:21.910496] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:24.823 [2024-11-20 07:15:22.018252] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:24.823 [2024-11-20 07:15:22.021780] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.081 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.398 "name": "raid_bdev1", 00:18:25.398 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:25.398 "strip_size_kb": 0, 00:18:25.398 "state": "online", 00:18:25.398 "raid_level": "raid1", 00:18:25.398 "superblock": false, 00:18:25.398 "num_base_bdevs": 4, 00:18:25.398 "num_base_bdevs_discovered": 3, 00:18:25.398 "num_base_bdevs_operational": 3, 00:18:25.398 "base_bdevs_list": [ 00:18:25.398 { 00:18:25.398 "name": "spare", 00:18:25.398 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:25.398 "is_configured": true, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 }, 00:18:25.398 { 00:18:25.398 "name": null, 00:18:25.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.398 "is_configured": false, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 }, 00:18:25.398 { 00:18:25.398 "name": "BaseBdev3", 00:18:25.398 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:25.398 "is_configured": true, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 }, 00:18:25.398 { 00:18:25.398 "name": "BaseBdev4", 00:18:25.398 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:25.398 "is_configured": true, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 } 00:18:25.398 ] 00:18:25.398 }' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:18:25.398 07:15:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.398 "name": "raid_bdev1", 00:18:25.398 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:25.398 "strip_size_kb": 0, 00:18:25.398 "state": "online", 00:18:25.398 "raid_level": "raid1", 00:18:25.398 "superblock": false, 00:18:25.398 "num_base_bdevs": 4, 00:18:25.398 "num_base_bdevs_discovered": 3, 00:18:25.398 "num_base_bdevs_operational": 3, 00:18:25.398 "base_bdevs_list": [ 00:18:25.398 { 00:18:25.398 "name": "spare", 00:18:25.398 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:25.398 "is_configured": true, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 }, 00:18:25.398 { 00:18:25.398 "name": null, 00:18:25.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.398 "is_configured": false, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 
00:18:25.398 }, 00:18:25.398 { 00:18:25.398 "name": "BaseBdev3", 00:18:25.398 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:25.398 "is_configured": true, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 }, 00:18:25.398 { 00:18:25.398 "name": "BaseBdev4", 00:18:25.398 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:25.398 "is_configured": true, 00:18:25.398 "data_offset": 0, 00:18:25.398 "data_size": 65536 00:18:25.398 } 00:18:25.398 ] 00:18:25.398 }' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.398 07:15:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.398 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.657 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.657 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.657 "name": "raid_bdev1", 00:18:25.657 "uuid": "02d55d5e-4cbb-49a2-9ca2-23ffca8678e2", 00:18:25.657 "strip_size_kb": 0, 00:18:25.657 "state": "online", 00:18:25.657 "raid_level": "raid1", 00:18:25.657 "superblock": false, 00:18:25.657 "num_base_bdevs": 4, 00:18:25.657 "num_base_bdevs_discovered": 3, 00:18:25.657 "num_base_bdevs_operational": 3, 00:18:25.657 "base_bdevs_list": [ 00:18:25.657 { 00:18:25.657 "name": "spare", 00:18:25.657 "uuid": "f2e14908-7e09-51e8-a198-b4abeb515929", 00:18:25.657 "is_configured": true, 00:18:25.657 "data_offset": 0, 00:18:25.657 "data_size": 65536 00:18:25.657 }, 00:18:25.657 { 00:18:25.657 "name": null, 00:18:25.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.657 "is_configured": false, 00:18:25.657 "data_offset": 0, 00:18:25.657 "data_size": 65536 00:18:25.657 }, 00:18:25.657 { 00:18:25.657 "name": "BaseBdev3", 00:18:25.657 "uuid": "70157a35-2adf-57ee-89a1-6c3e4457eebc", 00:18:25.657 "is_configured": true, 00:18:25.657 "data_offset": 0, 00:18:25.657 "data_size": 65536 00:18:25.657 }, 00:18:25.657 { 00:18:25.657 "name": "BaseBdev4", 00:18:25.657 "uuid": "41632219-ded3-55c8-847e-2818b089b80c", 00:18:25.657 "is_configured": true, 00:18:25.657 "data_offset": 0, 00:18:25.657 "data_size": 65536 00:18:25.657 } 
00:18:25.657 ] 00:18:25.657 }' 00:18:25.657 07:15:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.657 07:15:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.916 76.62 IOPS, 229.88 MiB/s [2024-11-20T07:15:23.236Z] 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:25.916 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.916 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:25.916 [2024-11-20 07:15:23.213955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:25.916 [2024-11-20 07:15:23.213990] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.175 00:18:26.175 Latency(us) 00:18:26.175 [2024-11-20T07:15:23.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.175 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:26.175 raid_bdev1 : 8.45 73.99 221.98 0.00 0.00 18632.31 309.06 123922.62 00:18:26.175 [2024-11-20T07:15:23.495Z] =================================================================================================================== 00:18:26.175 [2024-11-20T07:15:23.495Z] Total : 73.99 221.98 0.00 0.00 18632.31 309.06 123922.62 00:18:26.175 { 00:18:26.175 "results": [ 00:18:26.175 { 00:18:26.175 "job": "raid_bdev1", 00:18:26.175 "core_mask": "0x1", 00:18:26.175 "workload": "randrw", 00:18:26.175 "percentage": 50, 00:18:26.175 "status": "finished", 00:18:26.175 "queue_depth": 2, 00:18:26.175 "io_size": 3145728, 00:18:26.175 "runtime": 8.44676, 00:18:26.175 "iops": 73.99286827138454, 00:18:26.175 "mibps": 221.9786048141536, 00:18:26.175 "io_failed": 0, 00:18:26.175 "io_timeout": 0, 00:18:26.175 "avg_latency_us": 18632.312273454547, 00:18:26.175 "min_latency_us": 
309.0618181818182, 00:18:26.175 "max_latency_us": 123922.61818181818 00:18:26.175 } 00:18:26.175 ], 00:18:26.175 "core_count": 1 00:18:26.175 } 00:18:26.175 [2024-11-20 07:15:23.261996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.175 [2024-11-20 07:15:23.262059] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.175 [2024-11-20 07:15:23.262205] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.175 [2024-11-20 07:15:23.262222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:26.175 07:15:23 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.175 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:26.434 /dev/nbd0 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:18:26.434 1+0 records in 00:18:26.434 1+0 records out 00:18:26.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331682 s, 12.3 MB/s 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:26.434 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:26.435 07:15:23 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:27.002 /dev/nbd1 00:18:27.002 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:27.002 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:27.002 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:27.002 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:27.002 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.003 1+0 records in 00:18:27.003 1+0 records out 00:18:27.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615855 s, 6.7 MB/s 00:18:27.003 07:15:24 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.003 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd1 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.262 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:27.520 /dev/nbd1 00:18:27.520 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:27.778 
07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:27.778 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:27.779 1+0 records in 00:18:27.779 1+0 records out 00:18:27.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403102 s, 10.2 MB/s 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:27.779 07:15:24 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:27.779 07:15:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:28.038 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78984 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78984 ']' 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78984 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:18:28.296 07:15:25 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78984 00:18:28.296 killing process with pid 78984 00:18:28.296 Received shutdown signal, test time was about 10.721425 seconds 00:18:28.296 00:18:28.296 Latency(us) 00:18:28.296 [2024-11-20T07:15:25.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.296 [2024-11-20T07:15:25.616Z] =================================================================================================================== 00:18:28.296 [2024-11-20T07:15:25.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78984' 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78984 00:18:28.296 [2024-11-20 07:15:25.516487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:28.296 07:15:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78984 00:18:28.916 [2024-11-20 07:15:25.900224] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:29.849 07:15:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:29.849 00:18:29.849 real 0m14.346s 00:18:29.849 user 0m19.021s 00:18:29.849 sys 0m1.725s 00:18:29.849 07:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.849 07:15:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.849 ************************************ 00:18:29.849 END TEST raid_rebuild_test_io 00:18:29.849 
************************************ 00:18:29.849 07:15:27 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:18:29.849 07:15:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:29.849 07:15:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.849 07:15:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:29.849 ************************************ 00:18:29.849 START TEST raid_rebuild_test_sb_io 00:18:29.849 ************************************ 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.849 07:15:27 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # 
raid_pid=79404 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79404 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79404 ']' 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.849 07:15:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:29.849 [2024-11-20 07:15:27.140437] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:18:29.849 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:29.849 Zero copy mechanism will not be used. 
00:18:29.849 [2024-11-20 07:15:27.140601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79404 ] 00:18:30.108 [2024-11-20 07:15:27.313506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.366 [2024-11-20 07:15:27.445279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.366 [2024-11-20 07:15:27.647561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.366 [2024-11-20 07:15:27.647669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.935 BaseBdev1_malloc 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:30.935 [2024-11-20 07:15:28.209229] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:30.935 [2024-11-20 07:15:28.209323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.935 [2024-11-20 07:15:28.209356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:30.935 [2024-11-20 07:15:28.209375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.935 [2024-11-20 07:15:28.212233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.935 [2024-11-20 07:15:28.212289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:30.935 BaseBdev1 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.935 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.194 BaseBdev2_malloc 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.194 [2024-11-20 07:15:28.265942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:31.194 [2024-11-20 07:15:28.266025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:18:31.194 [2024-11-20 07:15:28.266052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:31.194 [2024-11-20 07:15:28.266073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.194 [2024-11-20 07:15:28.268920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.194 [2024-11-20 07:15:28.268969] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:31.194 BaseBdev2 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.194 BaseBdev3_malloc 00:18:31.194 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 [2024-11-20 07:15:28.328234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:31.195 [2024-11-20 07:15:28.328322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.195 [2024-11-20 07:15:28.328355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:31.195 
[2024-11-20 07:15:28.328374] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.195 [2024-11-20 07:15:28.331257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.195 [2024-11-20 07:15:28.331309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:31.195 BaseBdev3 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 BaseBdev4_malloc 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 [2024-11-20 07:15:28.380407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:31.195 [2024-11-20 07:15:28.380483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.195 [2024-11-20 07:15:28.380516] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:31.195 [2024-11-20 07:15:28.380534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.195 [2024-11-20 07:15:28.383276] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.195 [2024-11-20 07:15:28.383331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:31.195 BaseBdev4 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 spare_malloc 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 spare_delay 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 [2024-11-20 07:15:28.441217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:31.195 [2024-11-20 07:15:28.441442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.195 [2024-11-20 07:15:28.441484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:18:31.195 [2024-11-20 07:15:28.441504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.195 [2024-11-20 07:15:28.444348] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.195 [2024-11-20 07:15:28.444402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:31.195 spare 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 [2024-11-20 07:15:28.453315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.195 [2024-11-20 07:15:28.456088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.195 [2024-11-20 07:15:28.456332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.195 [2024-11-20 07:15:28.456477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:31.195 [2024-11-20 07:15:28.456779] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:31.195 [2024-11-20 07:15:28.456849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:31.195 [2024-11-20 07:15:28.457349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:31.195 [2024-11-20 07:15:28.457736] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:31.195 [2024-11-20 07:15:28.457861] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:31.195 [2024-11-20 07:15:28.458299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.195 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.454 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.454 "name": "raid_bdev1", 00:18:31.454 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:31.454 "strip_size_kb": 0, 00:18:31.454 "state": "online", 00:18:31.454 "raid_level": "raid1", 00:18:31.454 "superblock": true, 00:18:31.454 "num_base_bdevs": 4, 00:18:31.454 "num_base_bdevs_discovered": 4, 00:18:31.454 "num_base_bdevs_operational": 4, 00:18:31.454 "base_bdevs_list": [ 00:18:31.454 { 00:18:31.454 "name": "BaseBdev1", 00:18:31.454 "uuid": "2ccb650b-1a12-55d2-905a-5e388234d084", 00:18:31.454 "is_configured": true, 00:18:31.454 "data_offset": 2048, 00:18:31.454 "data_size": 63488 00:18:31.454 }, 00:18:31.454 { 00:18:31.454 "name": "BaseBdev2", 00:18:31.454 "uuid": "360ed934-fb15-5593-9e6d-fe0fc575e3a3", 00:18:31.454 "is_configured": true, 00:18:31.454 "data_offset": 2048, 00:18:31.454 "data_size": 63488 00:18:31.454 }, 00:18:31.454 { 00:18:31.454 "name": "BaseBdev3", 00:18:31.454 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:31.454 "is_configured": true, 00:18:31.454 "data_offset": 2048, 00:18:31.454 "data_size": 63488 00:18:31.454 }, 00:18:31.454 { 00:18:31.454 "name": "BaseBdev4", 00:18:31.454 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:31.454 "is_configured": true, 00:18:31.454 "data_offset": 2048, 00:18:31.454 "data_size": 63488 00:18:31.454 } 00:18:31.454 ] 00:18:31.454 }' 00:18:31.454 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.454 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.712 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:31.712 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:31.712 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.712 07:15:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.712 [2024-11-20 07:15:28.982888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.712 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.971 [2024-11-20 07:15:29.094387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.971 07:15:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.971 "name": "raid_bdev1", 00:18:31.971 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:31.971 "strip_size_kb": 0, 00:18:31.971 "state": "online", 00:18:31.971 "raid_level": "raid1", 00:18:31.971 
"superblock": true, 00:18:31.971 "num_base_bdevs": 4, 00:18:31.971 "num_base_bdevs_discovered": 3, 00:18:31.971 "num_base_bdevs_operational": 3, 00:18:31.971 "base_bdevs_list": [ 00:18:31.971 { 00:18:31.971 "name": null, 00:18:31.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.971 "is_configured": false, 00:18:31.971 "data_offset": 0, 00:18:31.971 "data_size": 63488 00:18:31.971 }, 00:18:31.971 { 00:18:31.971 "name": "BaseBdev2", 00:18:31.971 "uuid": "360ed934-fb15-5593-9e6d-fe0fc575e3a3", 00:18:31.971 "is_configured": true, 00:18:31.971 "data_offset": 2048, 00:18:31.971 "data_size": 63488 00:18:31.971 }, 00:18:31.971 { 00:18:31.971 "name": "BaseBdev3", 00:18:31.971 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:31.971 "is_configured": true, 00:18:31.971 "data_offset": 2048, 00:18:31.971 "data_size": 63488 00:18:31.971 }, 00:18:31.971 { 00:18:31.971 "name": "BaseBdev4", 00:18:31.971 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:31.971 "is_configured": true, 00:18:31.971 "data_offset": 2048, 00:18:31.971 "data_size": 63488 00:18:31.971 } 00:18:31.971 ] 00:18:31.971 }' 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.971 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:31.971 [2024-11-20 07:15:29.222822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:31.971 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:31.971 Zero copy mechanism will not be used. 00:18:31.971 Running I/O for 60 seconds... 
00:18:32.584 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:32.584 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.584 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:32.584 [2024-11-20 07:15:29.611519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.584 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.584 07:15:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:32.584 [2024-11-20 07:15:29.680688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:18:32.584 [2024-11-20 07:15:29.683475] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.584 [2024-11-20 07:15:29.820632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:32.843 [2024-11-20 07:15:29.945424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:32.843 [2024-11-20 07:15:29.946688] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:33.102 153.00 IOPS, 459.00 MiB/s [2024-11-20T07:15:30.422Z] [2024-11-20 07:15:30.365700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:33.361 [2024-11-20 07:15:30.598855] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:33.361 [2024-11-20 07:15:30.599515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:33.361 07:15:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.361 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.361 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.361 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.361 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.619 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.619 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.619 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.619 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.619 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.619 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.619 "name": "raid_bdev1", 00:18:33.619 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:33.619 "strip_size_kb": 0, 00:18:33.619 "state": "online", 00:18:33.619 "raid_level": "raid1", 00:18:33.619 "superblock": true, 00:18:33.619 "num_base_bdevs": 4, 00:18:33.619 "num_base_bdevs_discovered": 4, 00:18:33.619 "num_base_bdevs_operational": 4, 00:18:33.619 "process": { 00:18:33.619 "type": "rebuild", 00:18:33.619 "target": "spare", 00:18:33.619 "progress": { 00:18:33.619 "blocks": 10240, 00:18:33.619 "percent": 16 00:18:33.619 } 00:18:33.619 }, 00:18:33.619 "base_bdevs_list": [ 00:18:33.619 { 00:18:33.619 "name": "spare", 00:18:33.619 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:33.620 "is_configured": true, 00:18:33.620 "data_offset": 2048, 
00:18:33.620 "data_size": 63488 00:18:33.620 }, 00:18:33.620 { 00:18:33.620 "name": "BaseBdev2", 00:18:33.620 "uuid": "360ed934-fb15-5593-9e6d-fe0fc575e3a3", 00:18:33.620 "is_configured": true, 00:18:33.620 "data_offset": 2048, 00:18:33.620 "data_size": 63488 00:18:33.620 }, 00:18:33.620 { 00:18:33.620 "name": "BaseBdev3", 00:18:33.620 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:33.620 "is_configured": true, 00:18:33.620 "data_offset": 2048, 00:18:33.620 "data_size": 63488 00:18:33.620 }, 00:18:33.620 { 00:18:33.620 "name": "BaseBdev4", 00:18:33.620 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:33.620 "is_configured": true, 00:18:33.620 "data_offset": 2048, 00:18:33.620 "data_size": 63488 00:18:33.620 } 00:18:33.620 ] 00:18:33.620 }' 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.620 07:15:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.620 [2024-11-20 07:15:30.833905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.878 [2024-11-20 07:15:30.947414] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:33.878 [2024-11-20 07:15:30.959625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.878 [2024-11-20 07:15:30.959683] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.878 [2024-11-20 07:15:30.959724] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.878 [2024-11-20 07:15:31.000515] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.878 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.879 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.879 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.879 07:15:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:33.879 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.879 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.879 "name": "raid_bdev1", 00:18:33.879 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:33.879 "strip_size_kb": 0, 00:18:33.879 "state": "online", 00:18:33.879 "raid_level": "raid1", 00:18:33.879 "superblock": true, 00:18:33.879 "num_base_bdevs": 4, 00:18:33.879 "num_base_bdevs_discovered": 3, 00:18:33.879 "num_base_bdevs_operational": 3, 00:18:33.879 "base_bdevs_list": [ 00:18:33.879 { 00:18:33.879 "name": null, 00:18:33.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.879 "is_configured": false, 00:18:33.879 "data_offset": 0, 00:18:33.879 "data_size": 63488 00:18:33.879 }, 00:18:33.879 { 00:18:33.879 "name": "BaseBdev2", 00:18:33.879 "uuid": "360ed934-fb15-5593-9e6d-fe0fc575e3a3", 00:18:33.879 "is_configured": true, 00:18:33.879 "data_offset": 2048, 00:18:33.879 "data_size": 63488 00:18:33.879 }, 00:18:33.879 { 00:18:33.879 "name": "BaseBdev3", 00:18:33.879 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:33.879 "is_configured": true, 00:18:33.879 "data_offset": 2048, 00:18:33.879 "data_size": 63488 00:18:33.879 }, 00:18:33.879 { 00:18:33.879 "name": "BaseBdev4", 00:18:33.879 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:33.879 "is_configured": true, 00:18:33.879 "data_offset": 2048, 00:18:33.879 "data_size": 63488 00:18:33.879 } 00:18:33.879 ] 00:18:33.879 }' 00:18:33.879 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.879 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.398 119.50 IOPS, 358.50 MiB/s [2024-11-20T07:15:31.718Z] 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.398 "name": "raid_bdev1", 00:18:34.398 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:34.398 "strip_size_kb": 0, 00:18:34.398 "state": "online", 00:18:34.398 "raid_level": "raid1", 00:18:34.398 "superblock": true, 00:18:34.398 "num_base_bdevs": 4, 00:18:34.398 "num_base_bdevs_discovered": 3, 00:18:34.398 "num_base_bdevs_operational": 3, 00:18:34.398 "base_bdevs_list": [ 00:18:34.398 { 00:18:34.398 "name": null, 00:18:34.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.398 "is_configured": false, 00:18:34.398 "data_offset": 0, 00:18:34.398 "data_size": 63488 00:18:34.398 }, 00:18:34.398 { 00:18:34.398 "name": "BaseBdev2", 00:18:34.398 "uuid": "360ed934-fb15-5593-9e6d-fe0fc575e3a3", 00:18:34.398 "is_configured": true, 00:18:34.398 "data_offset": 2048, 00:18:34.398 "data_size": 63488 00:18:34.398 }, 00:18:34.398 { 00:18:34.398 "name": "BaseBdev3", 
00:18:34.398 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:34.398 "is_configured": true, 00:18:34.398 "data_offset": 2048, 00:18:34.398 "data_size": 63488 00:18:34.398 }, 00:18:34.398 { 00:18:34.398 "name": "BaseBdev4", 00:18:34.398 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:34.398 "is_configured": true, 00:18:34.398 "data_offset": 2048, 00:18:34.398 "data_size": 63488 00:18:34.398 } 00:18:34.398 ] 00:18:34.398 }' 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.398 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:34.398 [2024-11-20 07:15:31.669848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:34.657 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.658 07:15:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:34.658 [2024-11-20 07:15:31.765699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:34.658 [2024-11-20 07:15:31.768425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.658 [2024-11-20 07:15:31.880407] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:34.658 
[2024-11-20 07:15:31.882057] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:18:34.917 [2024-11-20 07:15:32.145168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:34.917 [2024-11-20 07:15:32.145564] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:18:35.175 128.00 IOPS, 384.00 MiB/s [2024-11-20T07:15:32.495Z] [2024-11-20 07:15:32.396705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:18:35.434 [2024-11-20 07:15:32.630072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:18:35.434 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.434 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.434 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.434 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.434 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.693 
07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.693 "name": "raid_bdev1", 00:18:35.693 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:35.693 "strip_size_kb": 0, 00:18:35.693 "state": "online", 00:18:35.693 "raid_level": "raid1", 00:18:35.693 "superblock": true, 00:18:35.693 "num_base_bdevs": 4, 00:18:35.693 "num_base_bdevs_discovered": 4, 00:18:35.693 "num_base_bdevs_operational": 4, 00:18:35.693 "process": { 00:18:35.693 "type": "rebuild", 00:18:35.693 "target": "spare", 00:18:35.693 "progress": { 00:18:35.693 "blocks": 10240, 00:18:35.693 "percent": 16 00:18:35.693 } 00:18:35.693 }, 00:18:35.693 "base_bdevs_list": [ 00:18:35.693 { 00:18:35.693 "name": "spare", 00:18:35.693 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:35.693 "is_configured": true, 00:18:35.693 "data_offset": 2048, 00:18:35.693 "data_size": 63488 00:18:35.693 }, 00:18:35.693 { 00:18:35.693 "name": "BaseBdev2", 00:18:35.693 "uuid": "360ed934-fb15-5593-9e6d-fe0fc575e3a3", 00:18:35.693 "is_configured": true, 00:18:35.693 "data_offset": 2048, 00:18:35.693 "data_size": 63488 00:18:35.693 }, 00:18:35.693 { 00:18:35.693 "name": "BaseBdev3", 00:18:35.693 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:35.693 "is_configured": true, 00:18:35.693 "data_offset": 2048, 00:18:35.693 "data_size": 63488 00:18:35.693 }, 00:18:35.693 { 00:18:35.693 "name": "BaseBdev4", 00:18:35.693 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:35.693 "is_configured": true, 00:18:35.693 "data_offset": 2048, 00:18:35.693 "data_size": 63488 00:18:35.693 } 00:18:35.693 ] 00:18:35.693 }' 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.693 
[2024-11-20 07:15:32.899431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:35.693 [2024-11-20 07:15:32.901083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:35.693 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.693 07:15:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:35.693 [2024-11-20 07:15:32.907922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:35.953 [2024-11-20 07:15:33.141555] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:18:36.212 106.00 IOPS, 318.00 MiB/s [2024-11-20T07:15:33.532Z] [2024-11-20 07:15:33.344755] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:18:36.212 [2024-11-20 07:15:33.344827] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 
0x60d0000063c0 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.212 "name": "raid_bdev1", 00:18:36.212 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:36.212 "strip_size_kb": 0, 00:18:36.212 "state": "online", 00:18:36.212 "raid_level": "raid1", 00:18:36.212 "superblock": true, 00:18:36.212 "num_base_bdevs": 4, 00:18:36.212 "num_base_bdevs_discovered": 3, 00:18:36.212 "num_base_bdevs_operational": 3, 00:18:36.212 "process": { 00:18:36.212 "type": 
"rebuild", 00:18:36.212 "target": "spare", 00:18:36.212 "progress": { 00:18:36.212 "blocks": 16384, 00:18:36.212 "percent": 25 00:18:36.212 } 00:18:36.212 }, 00:18:36.212 "base_bdevs_list": [ 00:18:36.212 { 00:18:36.212 "name": "spare", 00:18:36.212 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:36.212 "is_configured": true, 00:18:36.212 "data_offset": 2048, 00:18:36.212 "data_size": 63488 00:18:36.212 }, 00:18:36.212 { 00:18:36.212 "name": null, 00:18:36.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.212 "is_configured": false, 00:18:36.212 "data_offset": 0, 00:18:36.212 "data_size": 63488 00:18:36.212 }, 00:18:36.212 { 00:18:36.212 "name": "BaseBdev3", 00:18:36.212 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:36.212 "is_configured": true, 00:18:36.212 "data_offset": 2048, 00:18:36.212 "data_size": 63488 00:18:36.212 }, 00:18:36.212 { 00:18:36.212 "name": "BaseBdev4", 00:18:36.212 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:36.212 "is_configured": true, 00:18:36.212 "data_offset": 2048, 00:18:36.212 "data_size": 63488 00:18:36.212 } 00:18:36.212 ] 00:18:36.212 }' 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.212 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=538 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.472 "name": "raid_bdev1", 00:18:36.472 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:36.472 "strip_size_kb": 0, 00:18:36.472 "state": "online", 00:18:36.472 "raid_level": "raid1", 00:18:36.472 "superblock": true, 00:18:36.472 "num_base_bdevs": 4, 00:18:36.472 "num_base_bdevs_discovered": 3, 00:18:36.472 "num_base_bdevs_operational": 3, 00:18:36.472 "process": { 00:18:36.472 "type": "rebuild", 00:18:36.472 "target": "spare", 00:18:36.472 "progress": { 00:18:36.472 "blocks": 18432, 00:18:36.472 "percent": 29 00:18:36.472 } 00:18:36.472 }, 00:18:36.472 "base_bdevs_list": [ 00:18:36.472 { 00:18:36.472 "name": "spare", 00:18:36.472 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:36.472 "is_configured": true, 00:18:36.472 "data_offset": 2048, 00:18:36.472 "data_size": 63488 00:18:36.472 }, 00:18:36.472 { 00:18:36.472 "name": null, 00:18:36.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.472 
"is_configured": false, 00:18:36.472 "data_offset": 0, 00:18:36.472 "data_size": 63488 00:18:36.472 }, 00:18:36.472 { 00:18:36.472 "name": "BaseBdev3", 00:18:36.472 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:36.472 "is_configured": true, 00:18:36.472 "data_offset": 2048, 00:18:36.472 "data_size": 63488 00:18:36.472 }, 00:18:36.472 { 00:18:36.472 "name": "BaseBdev4", 00:18:36.472 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:36.472 "is_configured": true, 00:18:36.472 "data_offset": 2048, 00:18:36.472 "data_size": 63488 00:18:36.472 } 00:18:36.472 ] 00:18:36.472 }' 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.472 [2024-11-20 07:15:33.597844] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:36.472 [2024-11-20 07:15:33.598422] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.472 07:15:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:36.472 [2024-11-20 07:15:33.747104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:18:37.040 [2024-11-20 07:15:34.080640] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:18:37.299 95.60 IOPS, 286.80 MiB/s [2024-11-20T07:15:34.619Z] [2024-11-20 07:15:34.525224] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 
30720 offset_end: 36864 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.558 "name": "raid_bdev1", 00:18:37.558 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:37.558 "strip_size_kb": 0, 00:18:37.558 "state": "online", 00:18:37.558 "raid_level": "raid1", 00:18:37.558 "superblock": true, 00:18:37.558 "num_base_bdevs": 4, 00:18:37.558 "num_base_bdevs_discovered": 3, 00:18:37.558 "num_base_bdevs_operational": 3, 00:18:37.558 "process": { 00:18:37.558 "type": "rebuild", 00:18:37.558 "target": "spare", 00:18:37.558 "progress": { 00:18:37.558 "blocks": 36864, 00:18:37.558 "percent": 58 00:18:37.558 } 00:18:37.558 }, 00:18:37.558 "base_bdevs_list": [ 00:18:37.558 { 
00:18:37.558 "name": "spare", 00:18:37.558 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:37.558 "is_configured": true, 00:18:37.558 "data_offset": 2048, 00:18:37.558 "data_size": 63488 00:18:37.558 }, 00:18:37.558 { 00:18:37.558 "name": null, 00:18:37.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.558 "is_configured": false, 00:18:37.558 "data_offset": 0, 00:18:37.558 "data_size": 63488 00:18:37.558 }, 00:18:37.558 { 00:18:37.558 "name": "BaseBdev3", 00:18:37.558 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:37.558 "is_configured": true, 00:18:37.558 "data_offset": 2048, 00:18:37.558 "data_size": 63488 00:18:37.558 }, 00:18:37.558 { 00:18:37.558 "name": "BaseBdev4", 00:18:37.558 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:37.558 "is_configured": true, 00:18:37.558 "data_offset": 2048, 00:18:37.558 "data_size": 63488 00:18:37.558 } 00:18:37.558 ] 00:18:37.558 }' 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.558 07:15:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.692 86.17 IOPS, 258.50 MiB/s [2024-11-20T07:15:36.012Z] 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.692 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.692 "name": "raid_bdev1", 00:18:38.692 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:38.692 "strip_size_kb": 0, 00:18:38.692 "state": "online", 00:18:38.692 "raid_level": "raid1", 00:18:38.692 "superblock": true, 00:18:38.692 "num_base_bdevs": 4, 00:18:38.692 "num_base_bdevs_discovered": 3, 00:18:38.692 "num_base_bdevs_operational": 3, 00:18:38.692 "process": { 00:18:38.692 "type": "rebuild", 00:18:38.692 "target": "spare", 00:18:38.692 "progress": { 00:18:38.692 "blocks": 57344, 00:18:38.692 "percent": 90 00:18:38.692 } 00:18:38.692 }, 00:18:38.692 "base_bdevs_list": [ 00:18:38.692 { 00:18:38.692 "name": "spare", 00:18:38.692 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:38.692 "is_configured": true, 00:18:38.692 "data_offset": 2048, 00:18:38.692 "data_size": 63488 00:18:38.692 }, 00:18:38.692 { 00:18:38.692 "name": null, 00:18:38.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.692 "is_configured": false, 00:18:38.692 "data_offset": 0, 00:18:38.692 "data_size": 63488 00:18:38.692 }, 00:18:38.692 { 00:18:38.692 "name": "BaseBdev3", 00:18:38.692 
"uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:38.692 "is_configured": true, 00:18:38.692 "data_offset": 2048, 00:18:38.692 "data_size": 63488 00:18:38.692 }, 00:18:38.692 { 00:18:38.692 "name": "BaseBdev4", 00:18:38.692 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:38.693 "is_configured": true, 00:18:38.693 "data_offset": 2048, 00:18:38.693 "data_size": 63488 00:18:38.693 } 00:18:38.693 ] 00:18:38.693 }' 00:18:38.693 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.693 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:38.693 07:15:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.952 07:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:38.952 07:15:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.952 [2024-11-20 07:15:36.113833] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:38.952 [2024-11-20 07:15:36.221712] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:38.952 [2024-11-20 07:15:36.225593] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.891 78.43 IOPS, 235.29 MiB/s [2024-11-20T07:15:37.211Z] 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.891 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.891 "name": "raid_bdev1", 00:18:39.891 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:39.891 "strip_size_kb": 0, 00:18:39.891 "state": "online", 00:18:39.891 "raid_level": "raid1", 00:18:39.891 "superblock": true, 00:18:39.891 "num_base_bdevs": 4, 00:18:39.891 "num_base_bdevs_discovered": 3, 00:18:39.891 "num_base_bdevs_operational": 3, 00:18:39.891 "base_bdevs_list": [ 00:18:39.891 { 00:18:39.891 "name": "spare", 00:18:39.891 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:39.891 "is_configured": true, 00:18:39.891 "data_offset": 2048, 00:18:39.891 "data_size": 63488 00:18:39.891 }, 00:18:39.891 { 00:18:39.891 "name": null, 00:18:39.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.891 "is_configured": false, 00:18:39.891 "data_offset": 0, 00:18:39.891 "data_size": 63488 00:18:39.891 }, 00:18:39.891 { 00:18:39.891 "name": "BaseBdev3", 00:18:39.891 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:39.891 "is_configured": true, 00:18:39.891 "data_offset": 2048, 00:18:39.891 "data_size": 63488 00:18:39.891 }, 00:18:39.891 { 00:18:39.891 "name": "BaseBdev4", 00:18:39.892 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:39.892 "is_configured": true, 
00:18:39.892 "data_offset": 2048, 00:18:39.892 "data_size": 63488 00:18:39.892 } 00:18:39.892 ] 00:18:39.892 }' 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:39.892 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.152 72.88 IOPS, 218.62 MiB/s [2024-11-20T07:15:37.472Z] 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.152 "name": "raid_bdev1", 
00:18:40.152 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:40.152 "strip_size_kb": 0, 00:18:40.152 "state": "online", 00:18:40.152 "raid_level": "raid1", 00:18:40.152 "superblock": true, 00:18:40.152 "num_base_bdevs": 4, 00:18:40.152 "num_base_bdevs_discovered": 3, 00:18:40.152 "num_base_bdevs_operational": 3, 00:18:40.152 "base_bdevs_list": [ 00:18:40.152 { 00:18:40.152 "name": "spare", 00:18:40.152 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:40.152 "is_configured": true, 00:18:40.152 "data_offset": 2048, 00:18:40.152 "data_size": 63488 00:18:40.152 }, 00:18:40.152 { 00:18:40.152 "name": null, 00:18:40.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.152 "is_configured": false, 00:18:40.152 "data_offset": 0, 00:18:40.152 "data_size": 63488 00:18:40.152 }, 00:18:40.152 { 00:18:40.152 "name": "BaseBdev3", 00:18:40.152 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:40.152 "is_configured": true, 00:18:40.152 "data_offset": 2048, 00:18:40.152 "data_size": 63488 00:18:40.152 }, 00:18:40.152 { 00:18:40.152 "name": "BaseBdev4", 00:18:40.152 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:40.152 "is_configured": true, 00:18:40.152 "data_offset": 2048, 00:18:40.152 "data_size": 63488 00:18:40.152 } 00:18:40.152 ] 00:18:40.152 }' 00:18:40.152 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.152 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.152 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:40.152 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.152 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:40.152 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.153 "name": "raid_bdev1", 00:18:40.153 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:40.153 "strip_size_kb": 0, 00:18:40.153 "state": "online", 00:18:40.153 "raid_level": "raid1", 00:18:40.153 "superblock": true, 00:18:40.153 "num_base_bdevs": 4, 00:18:40.153 "num_base_bdevs_discovered": 3, 00:18:40.153 "num_base_bdevs_operational": 3, 00:18:40.153 "base_bdevs_list": [ 00:18:40.153 { 00:18:40.153 "name": "spare", 00:18:40.153 "uuid": 
"6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:40.153 "is_configured": true, 00:18:40.153 "data_offset": 2048, 00:18:40.153 "data_size": 63488 00:18:40.153 }, 00:18:40.153 { 00:18:40.153 "name": null, 00:18:40.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.153 "is_configured": false, 00:18:40.153 "data_offset": 0, 00:18:40.153 "data_size": 63488 00:18:40.153 }, 00:18:40.153 { 00:18:40.153 "name": "BaseBdev3", 00:18:40.153 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:40.153 "is_configured": true, 00:18:40.153 "data_offset": 2048, 00:18:40.153 "data_size": 63488 00:18:40.153 }, 00:18:40.153 { 00:18:40.153 "name": "BaseBdev4", 00:18:40.153 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:40.153 "is_configured": true, 00:18:40.153 "data_offset": 2048, 00:18:40.153 "data_size": 63488 00:18:40.153 } 00:18:40.153 ] 00:18:40.153 }' 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.153 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.806 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.806 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.806 07:15:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:40.806 [2024-11-20 07:15:37.880271] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.806 [2024-11-20 07:15:37.880460] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.806 00:18:40.806 Latency(us) 00:18:40.806 [2024-11-20T07:15:38.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.806 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:18:40.806 raid_bdev1 : 8.75 68.65 205.96 0.00 0.00 20480.73 296.03 123922.62 00:18:40.806 
[2024-11-20T07:15:38.126Z] =================================================================================================================== 00:18:40.806 [2024-11-20T07:15:38.126Z] Total : 68.65 205.96 0.00 0.00 20480.73 296.03 123922.62 00:18:40.806 [2024-11-20 07:15:38.000382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:40.806 [2024-11-20 07:15:38.000453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.806 [2024-11-20 07:15:38.000596] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.806 [2024-11-20 07:15:38.000621] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:40.806 { 00:18:40.806 "results": [ 00:18:40.806 { 00:18:40.806 "job": "raid_bdev1", 00:18:40.806 "core_mask": "0x1", 00:18:40.806 "workload": "randrw", 00:18:40.806 "percentage": 50, 00:18:40.806 "status": "finished", 00:18:40.806 "queue_depth": 2, 00:18:40.806 "io_size": 3145728, 00:18:40.806 "runtime": 8.754269, 00:18:40.806 "iops": 68.65221984839626, 00:18:40.806 "mibps": 205.95665954518876, 00:18:40.806 "io_failed": 0, 00:18:40.806 "io_timeout": 0, 00:18:40.806 "avg_latency_us": 20480.73264559068, 00:18:40.806 "min_latency_us": 296.0290909090909, 00:18:40.806 "max_latency_us": 123922.61818181818 00:18:40.806 } 00:18:40.806 ], 00:18:40.806 "core_count": 1 00:18:40.806 } 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:40.806 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:18:41.065 /dev/nbd0 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 
-- # local i 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:41.065 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.325 1+0 records in 00:18:41.325 1+0 records out 00:18:41.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575875 s, 7.1 MB/s 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.325 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:18:41.584 /dev/nbd1 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:41.584 07:15:38 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:41.584 1+0 records in 00:18:41.584 1+0 records out 00:18:41.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219381 s, 18.7 MB/s 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:41.584 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:41.843 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:18:41.843 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:41.843 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:41.843 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:41.843 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:41.844 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:41.844 07:15:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.102 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:18:42.360 /dev/nbd1 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:42.360 07:15:39 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:42.360 1+0 records in 00:18:42.360 1+0 records out 00:18:42.360 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096383 s, 4.2 MB/s 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:42.360 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:18:42.619 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:42.878 07:15:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:43.137 07:15:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.137 [2024-11-20 07:15:40.266703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:43.137 [2024-11-20 07:15:40.266785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.137 [2024-11-20 07:15:40.266817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:43.137 [2024-11-20 
07:15:40.266836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.137 [2024-11-20 07:15:40.269809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.137 [2024-11-20 07:15:40.269862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:43.137 [2024-11-20 07:15:40.270005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:43.137 [2024-11-20 07:15:40.270084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:43.137 [2024-11-20 07:15:40.270259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.137 [2024-11-20 07:15:40.270415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:43.137 spare 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.137 [2024-11-20 07:15:40.370568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:43.137 [2024-11-20 07:15:40.370640] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:43.137 [2024-11-20 07:15:40.371143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:18:43.137 [2024-11-20 07:15:40.371428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:43.137 [2024-11-20 07:15:40.371457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:43.137 [2024-11-20 07:15:40.371855] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.137 "name": 
"raid_bdev1", 00:18:43.137 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:43.137 "strip_size_kb": 0, 00:18:43.137 "state": "online", 00:18:43.137 "raid_level": "raid1", 00:18:43.137 "superblock": true, 00:18:43.137 "num_base_bdevs": 4, 00:18:43.137 "num_base_bdevs_discovered": 3, 00:18:43.137 "num_base_bdevs_operational": 3, 00:18:43.137 "base_bdevs_list": [ 00:18:43.137 { 00:18:43.137 "name": "spare", 00:18:43.137 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:43.137 "is_configured": true, 00:18:43.137 "data_offset": 2048, 00:18:43.137 "data_size": 63488 00:18:43.137 }, 00:18:43.137 { 00:18:43.137 "name": null, 00:18:43.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.137 "is_configured": false, 00:18:43.137 "data_offset": 2048, 00:18:43.137 "data_size": 63488 00:18:43.137 }, 00:18:43.137 { 00:18:43.137 "name": "BaseBdev3", 00:18:43.137 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:43.137 "is_configured": true, 00:18:43.137 "data_offset": 2048, 00:18:43.137 "data_size": 63488 00:18:43.137 }, 00:18:43.137 { 00:18:43.137 "name": "BaseBdev4", 00:18:43.137 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:43.137 "is_configured": true, 00:18:43.137 "data_offset": 2048, 00:18:43.137 "data_size": 63488 00:18:43.137 } 00:18:43.137 ] 00:18:43.137 }' 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.137 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.705 07:15:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.705 "name": "raid_bdev1", 00:18:43.705 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:43.705 "strip_size_kb": 0, 00:18:43.705 "state": "online", 00:18:43.705 "raid_level": "raid1", 00:18:43.705 "superblock": true, 00:18:43.705 "num_base_bdevs": 4, 00:18:43.705 "num_base_bdevs_discovered": 3, 00:18:43.705 "num_base_bdevs_operational": 3, 00:18:43.705 "base_bdevs_list": [ 00:18:43.705 { 00:18:43.705 "name": "spare", 00:18:43.705 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:43.705 "is_configured": true, 00:18:43.705 "data_offset": 2048, 00:18:43.705 "data_size": 63488 00:18:43.705 }, 00:18:43.705 { 00:18:43.705 "name": null, 00:18:43.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.705 "is_configured": false, 00:18:43.705 "data_offset": 2048, 00:18:43.705 "data_size": 63488 00:18:43.705 }, 00:18:43.705 { 00:18:43.705 "name": "BaseBdev3", 00:18:43.705 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:43.705 "is_configured": true, 00:18:43.705 "data_offset": 2048, 00:18:43.705 "data_size": 63488 00:18:43.705 }, 00:18:43.705 { 00:18:43.705 "name": "BaseBdev4", 00:18:43.705 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:43.705 "is_configured": true, 00:18:43.705 "data_offset": 2048, 
00:18:43.705 "data_size": 63488 00:18:43.705 } 00:18:43.705 ] 00:18:43.705 }' 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.705 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.706 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.706 07:15:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.706 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.706 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.706 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.706 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:43.706 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.965 [2024-11-20 07:15:41.059982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.965 "name": "raid_bdev1", 00:18:43.965 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:43.965 "strip_size_kb": 0, 00:18:43.965 "state": "online", 00:18:43.965 "raid_level": "raid1", 00:18:43.965 "superblock": true, 00:18:43.965 "num_base_bdevs": 4, 00:18:43.965 "num_base_bdevs_discovered": 2, 00:18:43.965 "num_base_bdevs_operational": 2, 00:18:43.965 "base_bdevs_list": [ 00:18:43.965 { 00:18:43.965 "name": 
null, 00:18:43.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.965 "is_configured": false, 00:18:43.965 "data_offset": 0, 00:18:43.965 "data_size": 63488 00:18:43.965 }, 00:18:43.965 { 00:18:43.965 "name": null, 00:18:43.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.965 "is_configured": false, 00:18:43.965 "data_offset": 2048, 00:18:43.965 "data_size": 63488 00:18:43.965 }, 00:18:43.965 { 00:18:43.965 "name": "BaseBdev3", 00:18:43.965 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:43.965 "is_configured": true, 00:18:43.965 "data_offset": 2048, 00:18:43.965 "data_size": 63488 00:18:43.965 }, 00:18:43.965 { 00:18:43.965 "name": "BaseBdev4", 00:18:43.965 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:43.965 "is_configured": true, 00:18:43.965 "data_offset": 2048, 00:18:43.965 "data_size": 63488 00:18:43.965 } 00:18:43.965 ] 00:18:43.965 }' 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.965 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.531 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:44.531 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.531 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:44.531 [2024-11-20 07:15:41.612306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.531 [2024-11-20 07:15:41.612569] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:44.531 [2024-11-20 07:15:41.612596] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:44.531 [2024-11-20 07:15:41.612661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:44.531 [2024-11-20 07:15:41.626591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:18:44.531 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.531 07:15:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:44.531 [2024-11-20 07:15:41.629184] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:45.463 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:45.463 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.463 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:45.463 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:45.463 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.464 "name": "raid_bdev1", 00:18:45.464 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:45.464 "strip_size_kb": 0, 00:18:45.464 "state": "online", 
00:18:45.464 "raid_level": "raid1", 00:18:45.464 "superblock": true, 00:18:45.464 "num_base_bdevs": 4, 00:18:45.464 "num_base_bdevs_discovered": 3, 00:18:45.464 "num_base_bdevs_operational": 3, 00:18:45.464 "process": { 00:18:45.464 "type": "rebuild", 00:18:45.464 "target": "spare", 00:18:45.464 "progress": { 00:18:45.464 "blocks": 20480, 00:18:45.464 "percent": 32 00:18:45.464 } 00:18:45.464 }, 00:18:45.464 "base_bdevs_list": [ 00:18:45.464 { 00:18:45.464 "name": "spare", 00:18:45.464 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:45.464 "is_configured": true, 00:18:45.464 "data_offset": 2048, 00:18:45.464 "data_size": 63488 00:18:45.464 }, 00:18:45.464 { 00:18:45.464 "name": null, 00:18:45.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.464 "is_configured": false, 00:18:45.464 "data_offset": 2048, 00:18:45.464 "data_size": 63488 00:18:45.464 }, 00:18:45.464 { 00:18:45.464 "name": "BaseBdev3", 00:18:45.464 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:45.464 "is_configured": true, 00:18:45.464 "data_offset": 2048, 00:18:45.464 "data_size": 63488 00:18:45.464 }, 00:18:45.464 { 00:18:45.464 "name": "BaseBdev4", 00:18:45.464 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:45.464 "is_configured": true, 00:18:45.464 "data_offset": 2048, 00:18:45.464 "data_size": 63488 00:18:45.464 } 00:18:45.464 ] 00:18:45.464 }' 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:45.464 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.721 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:45.721 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:45.721 07:15:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.721 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.722 [2024-11-20 07:15:42.794675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.722 [2024-11-20 07:15:42.837990] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:45.722 [2024-11-20 07:15:42.838108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.722 [2024-11-20 07:15:42.838136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:45.722 [2024-11-20 07:15:42.838150] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.722 07:15:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.722 "name": "raid_bdev1", 00:18:45.722 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:45.722 "strip_size_kb": 0, 00:18:45.722 "state": "online", 00:18:45.722 "raid_level": "raid1", 00:18:45.722 "superblock": true, 00:18:45.722 "num_base_bdevs": 4, 00:18:45.722 "num_base_bdevs_discovered": 2, 00:18:45.722 "num_base_bdevs_operational": 2, 00:18:45.722 "base_bdevs_list": [ 00:18:45.722 { 00:18:45.722 "name": null, 00:18:45.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.722 "is_configured": false, 00:18:45.722 "data_offset": 0, 00:18:45.722 "data_size": 63488 00:18:45.722 }, 00:18:45.722 { 00:18:45.722 "name": null, 00:18:45.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.722 "is_configured": false, 00:18:45.722 "data_offset": 2048, 00:18:45.722 "data_size": 63488 00:18:45.722 }, 00:18:45.722 { 00:18:45.722 "name": "BaseBdev3", 00:18:45.722 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:45.722 "is_configured": true, 00:18:45.722 "data_offset": 2048, 00:18:45.722 "data_size": 63488 00:18:45.722 }, 00:18:45.722 { 00:18:45.722 "name": "BaseBdev4", 00:18:45.722 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:45.722 "is_configured": true, 00:18:45.722 "data_offset": 2048, 00:18:45.722 
"data_size": 63488 00:18:45.722 } 00:18:45.722 ] 00:18:45.722 }' 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.722 07:15:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.286 07:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:46.286 07:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.286 07:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:46.286 [2024-11-20 07:15:43.361600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:46.286 [2024-11-20 07:15:43.361851] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.286 [2024-11-20 07:15:43.361904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:46.286 [2024-11-20 07:15:43.361928] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.286 [2024-11-20 07:15:43.362523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.286 [2024-11-20 07:15:43.362567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:46.286 [2024-11-20 07:15:43.362688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:46.286 [2024-11-20 07:15:43.362713] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:18:46.286 [2024-11-20 07:15:43.362728] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:46.286 [2024-11-20 07:15:43.362765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:46.286 [2024-11-20 07:15:43.376694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:18:46.286 spare 00:18:46.286 07:15:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.286 07:15:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:46.286 [2024-11-20 07:15:43.379302] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.220 "name": "raid_bdev1", 00:18:47.220 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:47.220 "strip_size_kb": 0, 00:18:47.220 
"state": "online", 00:18:47.220 "raid_level": "raid1", 00:18:47.220 "superblock": true, 00:18:47.220 "num_base_bdevs": 4, 00:18:47.220 "num_base_bdevs_discovered": 3, 00:18:47.220 "num_base_bdevs_operational": 3, 00:18:47.220 "process": { 00:18:47.220 "type": "rebuild", 00:18:47.220 "target": "spare", 00:18:47.220 "progress": { 00:18:47.220 "blocks": 20480, 00:18:47.220 "percent": 32 00:18:47.220 } 00:18:47.220 }, 00:18:47.220 "base_bdevs_list": [ 00:18:47.220 { 00:18:47.220 "name": "spare", 00:18:47.220 "uuid": "6a42d675-7c2a-5272-b6d7-aed86d868d11", 00:18:47.220 "is_configured": true, 00:18:47.220 "data_offset": 2048, 00:18:47.220 "data_size": 63488 00:18:47.220 }, 00:18:47.220 { 00:18:47.220 "name": null, 00:18:47.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.220 "is_configured": false, 00:18:47.220 "data_offset": 2048, 00:18:47.220 "data_size": 63488 00:18:47.220 }, 00:18:47.220 { 00:18:47.220 "name": "BaseBdev3", 00:18:47.220 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:47.220 "is_configured": true, 00:18:47.220 "data_offset": 2048, 00:18:47.220 "data_size": 63488 00:18:47.220 }, 00:18:47.220 { 00:18:47.220 "name": "BaseBdev4", 00:18:47.220 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:47.220 "is_configured": true, 00:18:47.220 "data_offset": 2048, 00:18:47.220 "data_size": 63488 00:18:47.220 } 00:18:47.220 ] 00:18:47.220 }' 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:47.220 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:47.479 07:15:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.479 [2024-11-20 07:15:44.544735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.479 [2024-11-20 07:15:44.588636] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:47.479 [2024-11-20 07:15:44.588936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.479 [2024-11-20 07:15:44.589209] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:47.479 [2024-11-20 07:15:44.589265] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.479 07:15:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.479 "name": "raid_bdev1", 00:18:47.479 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:47.479 "strip_size_kb": 0, 00:18:47.479 "state": "online", 00:18:47.479 "raid_level": "raid1", 00:18:47.479 "superblock": true, 00:18:47.479 "num_base_bdevs": 4, 00:18:47.479 "num_base_bdevs_discovered": 2, 00:18:47.479 "num_base_bdevs_operational": 2, 00:18:47.479 "base_bdevs_list": [ 00:18:47.479 { 00:18:47.479 "name": null, 00:18:47.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.479 "is_configured": false, 00:18:47.479 "data_offset": 0, 00:18:47.479 "data_size": 63488 00:18:47.479 }, 00:18:47.479 { 00:18:47.479 "name": null, 00:18:47.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.479 "is_configured": false, 00:18:47.479 "data_offset": 2048, 00:18:47.479 "data_size": 63488 00:18:47.479 }, 00:18:47.479 { 00:18:47.479 "name": "BaseBdev3", 00:18:47.479 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:47.479 "is_configured": true, 00:18:47.479 "data_offset": 2048, 00:18:47.479 "data_size": 63488 00:18:47.479 }, 00:18:47.479 { 00:18:47.479 "name": "BaseBdev4", 00:18:47.479 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:47.479 "is_configured": true, 00:18:47.479 "data_offset": 2048, 00:18:47.479 
"data_size": 63488 00:18:47.479 } 00:18:47.479 ] 00:18:47.479 }' 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.479 07:15:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.046 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:48.046 "name": "raid_bdev1", 00:18:48.046 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:48.046 "strip_size_kb": 0, 00:18:48.046 "state": "online", 00:18:48.046 "raid_level": "raid1", 00:18:48.046 "superblock": true, 00:18:48.046 "num_base_bdevs": 4, 00:18:48.046 "num_base_bdevs_discovered": 2, 00:18:48.046 "num_base_bdevs_operational": 2, 00:18:48.046 "base_bdevs_list": [ 00:18:48.046 { 00:18:48.046 "name": null, 00:18:48.046 "uuid": "00000000-0000-0000-0000-000000000000", 
00:18:48.046 "is_configured": false, 00:18:48.046 "data_offset": 0, 00:18:48.046 "data_size": 63488 00:18:48.046 }, 00:18:48.046 { 00:18:48.046 "name": null, 00:18:48.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.046 "is_configured": false, 00:18:48.046 "data_offset": 2048, 00:18:48.046 "data_size": 63488 00:18:48.046 }, 00:18:48.046 { 00:18:48.046 "name": "BaseBdev3", 00:18:48.046 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:48.046 "is_configured": true, 00:18:48.046 "data_offset": 2048, 00:18:48.046 "data_size": 63488 00:18:48.047 }, 00:18:48.047 { 00:18:48.047 "name": "BaseBdev4", 00:18:48.047 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:48.047 "is_configured": true, 00:18:48.047 "data_offset": 2048, 00:18:48.047 "data_size": 63488 00:18:48.047 } 00:18:48.047 ] 00:18:48.047 }' 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.047 07:15:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:48.047 [2024-11-20 07:15:45.319673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:48.047 [2024-11-20 07:15:45.319750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.047 [2024-11-20 07:15:45.319789] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:18:48.047 [2024-11-20 07:15:45.319805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.047 [2024-11-20 07:15:45.320432] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.047 [2024-11-20 07:15:45.320466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:48.047 [2024-11-20 07:15:45.320579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:48.047 [2024-11-20 07:15:45.320605] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:48.047 [2024-11-20 07:15:45.320619] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:48.047 [2024-11-20 07:15:45.320632] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:48.047 BaseBdev1 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.047 07:15:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.423 "name": "raid_bdev1", 00:18:49.423 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:49.423 "strip_size_kb": 0, 00:18:49.423 "state": "online", 00:18:49.423 "raid_level": "raid1", 00:18:49.423 "superblock": true, 00:18:49.423 "num_base_bdevs": 4, 00:18:49.423 "num_base_bdevs_discovered": 2, 00:18:49.423 "num_base_bdevs_operational": 2, 00:18:49.423 "base_bdevs_list": [ 00:18:49.423 { 00:18:49.423 "name": null, 00:18:49.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.423 "is_configured": false, 00:18:49.423 
"data_offset": 0, 00:18:49.423 "data_size": 63488 00:18:49.423 }, 00:18:49.423 { 00:18:49.423 "name": null, 00:18:49.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.423 "is_configured": false, 00:18:49.423 "data_offset": 2048, 00:18:49.423 "data_size": 63488 00:18:49.423 }, 00:18:49.423 { 00:18:49.423 "name": "BaseBdev3", 00:18:49.423 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:49.423 "is_configured": true, 00:18:49.423 "data_offset": 2048, 00:18:49.423 "data_size": 63488 00:18:49.423 }, 00:18:49.423 { 00:18:49.423 "name": "BaseBdev4", 00:18:49.423 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:49.423 "is_configured": true, 00:18:49.423 "data_offset": 2048, 00:18:49.423 "data_size": 63488 00:18:49.423 } 00:18:49.423 ] 00:18:49.423 }' 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.423 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.682 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.682 "name": "raid_bdev1", 00:18:49.682 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:49.682 "strip_size_kb": 0, 00:18:49.682 "state": "online", 00:18:49.682 "raid_level": "raid1", 00:18:49.682 "superblock": true, 00:18:49.682 "num_base_bdevs": 4, 00:18:49.682 "num_base_bdevs_discovered": 2, 00:18:49.682 "num_base_bdevs_operational": 2, 00:18:49.682 "base_bdevs_list": [ 00:18:49.682 { 00:18:49.682 "name": null, 00:18:49.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.682 "is_configured": false, 00:18:49.682 "data_offset": 0, 00:18:49.682 "data_size": 63488 00:18:49.682 }, 00:18:49.682 { 00:18:49.682 "name": null, 00:18:49.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.682 "is_configured": false, 00:18:49.682 "data_offset": 2048, 00:18:49.682 "data_size": 63488 00:18:49.682 }, 00:18:49.682 { 00:18:49.683 "name": "BaseBdev3", 00:18:49.683 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:49.683 "is_configured": true, 00:18:49.683 "data_offset": 2048, 00:18:49.683 "data_size": 63488 00:18:49.683 }, 00:18:49.683 { 00:18:49.683 "name": "BaseBdev4", 00:18:49.683 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:49.683 "is_configured": true, 00:18:49.683 "data_offset": 2048, 00:18:49.683 "data_size": 63488 00:18:49.683 } 00:18:49.683 ] 00:18:49.683 }' 00:18:49.683 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.683 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:49.683 07:15:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:49.941 
07:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.941 [2024-11-20 07:15:47.020549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:49.941 [2024-11-20 07:15:47.020791] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:18:49.941 [2024-11-20 07:15:47.020819] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:49.941 request: 00:18:49.941 { 00:18:49.941 "base_bdev": "BaseBdev1", 00:18:49.941 "raid_bdev": "raid_bdev1", 00:18:49.941 "method": "bdev_raid_add_base_bdev", 00:18:49.941 "req_id": 1 00:18:49.941 } 00:18:49.941 Got JSON-RPC error response 00:18:49.941 response: 00:18:49.941 { 00:18:49.941 "code": -22, 00:18:49.941 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:49.941 } 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.941 07:15:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.879 07:15:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:50.879 "name": "raid_bdev1", 00:18:50.879 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:50.879 "strip_size_kb": 0, 00:18:50.879 "state": "online", 00:18:50.879 "raid_level": "raid1", 00:18:50.879 "superblock": true, 00:18:50.879 "num_base_bdevs": 4, 00:18:50.879 "num_base_bdevs_discovered": 2, 00:18:50.879 "num_base_bdevs_operational": 2, 00:18:50.879 "base_bdevs_list": [ 00:18:50.879 { 00:18:50.879 "name": null, 00:18:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.879 "is_configured": false, 00:18:50.879 "data_offset": 0, 00:18:50.879 "data_size": 63488 00:18:50.879 }, 00:18:50.879 { 00:18:50.879 "name": null, 00:18:50.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.879 "is_configured": false, 00:18:50.879 "data_offset": 2048, 00:18:50.879 "data_size": 63488 00:18:50.879 }, 00:18:50.879 { 00:18:50.879 "name": "BaseBdev3", 00:18:50.879 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:50.879 "is_configured": true, 00:18:50.879 "data_offset": 2048, 00:18:50.879 "data_size": 63488 00:18:50.879 }, 00:18:50.879 { 00:18:50.879 "name": "BaseBdev4", 00:18:50.879 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:50.879 "is_configured": true, 00:18:50.879 "data_offset": 2048, 00:18:50.879 "data_size": 63488 00:18:50.879 } 00:18:50.879 ] 00:18:50.879 }' 00:18:50.879 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:50.879 07:15:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.446 "name": "raid_bdev1", 00:18:51.446 "uuid": "74361d71-b185-4bef-83f9-fbc38b0dd24b", 00:18:51.446 "strip_size_kb": 0, 00:18:51.446 "state": "online", 00:18:51.446 "raid_level": "raid1", 00:18:51.446 "superblock": true, 00:18:51.446 "num_base_bdevs": 4, 00:18:51.446 "num_base_bdevs_discovered": 2, 00:18:51.446 "num_base_bdevs_operational": 2, 00:18:51.446 "base_bdevs_list": [ 00:18:51.446 { 00:18:51.446 "name": null, 00:18:51.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.446 "is_configured": false, 00:18:51.446 "data_offset": 0, 00:18:51.446 "data_size": 63488 00:18:51.446 }, 00:18:51.446 { 00:18:51.446 "name": null, 00:18:51.446 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:51.446 "is_configured": false, 00:18:51.446 "data_offset": 2048, 00:18:51.446 "data_size": 63488 00:18:51.446 }, 00:18:51.446 { 00:18:51.446 "name": "BaseBdev3", 00:18:51.446 "uuid": "354221ee-2a00-555f-a248-20bc49157731", 00:18:51.446 "is_configured": true, 00:18:51.446 "data_offset": 2048, 00:18:51.446 "data_size": 63488 00:18:51.446 }, 00:18:51.446 { 00:18:51.446 "name": "BaseBdev4", 00:18:51.446 "uuid": "6b25753b-9add-5f70-b8d7-f2f79b73d3a8", 00:18:51.446 "is_configured": true, 00:18:51.446 "data_offset": 2048, 00:18:51.446 "data_size": 63488 00:18:51.446 } 00:18:51.446 ] 00:18:51.446 }' 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79404 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79404 ']' 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79404 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.446 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79404 00:18:51.446 killing process with pid 79404 00:18:51.446 Received shutdown signal, test time was about 19.506404 seconds 00:18:51.446 00:18:51.446 Latency(us) 00:18:51.446 [2024-11-20T07:15:48.766Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:18:51.446 [2024-11-20T07:15:48.766Z] =================================================================================================================== 00:18:51.446 [2024-11-20T07:15:48.766Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:51.447 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:51.447 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:51.447 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79404' 00:18:51.447 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79404 00:18:51.447 [2024-11-20 07:15:48.732348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:51.447 07:15:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79404 00:18:51.447 [2024-11-20 07:15:48.732512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:51.447 [2024-11-20 07:15:48.732603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:51.447 [2024-11-20 07:15:48.732625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:52.030 [2024-11-20 07:15:49.110374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.968 07:15:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:18:52.968 00:18:52.968 real 0m23.168s 00:18:52.968 user 0m31.442s 00:18:52.968 sys 0m2.332s 00:18:52.968 07:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.968 07:15:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 ************************************ 00:18:52.968 END TEST raid_rebuild_test_sb_io 00:18:52.968 
************************************ 00:18:52.968 07:15:50 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:52.968 07:15:50 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:18:52.968 07:15:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:52.968 07:15:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.968 07:15:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.968 ************************************ 00:18:52.968 START TEST raid5f_state_function_test 00:18:52.968 ************************************ 00:18:52.968 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:18:52.968 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:52.968 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:18:52.968 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:52.968 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:52.968 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.969 07:15:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:52.969 Process raid pid: 80144 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80144 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80144' 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80144 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80144 ']' 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.969 07:15:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.227 [2024-11-20 07:15:50.386912] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:18:53.227 [2024-11-20 07:15:50.387085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.486 [2024-11-20 07:15:50.580550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.486 [2024-11-20 07:15:50.741993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.744 [2024-11-20 07:15:50.957320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.744 [2024-11-20 07:15:50.957405] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.311 [2024-11-20 07:15:51.388983] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.311 [2024-11-20 07:15:51.389048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.311 [2024-11-20 07:15:51.389066] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.311 [2024-11-20 07:15:51.389083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.311 [2024-11-20 07:15:51.389093] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:18:54.311 [2024-11-20 07:15:51.389107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:54.311 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.311 "name": "Existed_Raid", 00:18:54.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.311 "strip_size_kb": 64, 00:18:54.311 "state": "configuring", 00:18:54.311 "raid_level": "raid5f", 00:18:54.311 "superblock": false, 00:18:54.312 "num_base_bdevs": 3, 00:18:54.312 "num_base_bdevs_discovered": 0, 00:18:54.312 "num_base_bdevs_operational": 3, 00:18:54.312 "base_bdevs_list": [ 00:18:54.312 { 00:18:54.312 "name": "BaseBdev1", 00:18:54.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.312 "is_configured": false, 00:18:54.312 "data_offset": 0, 00:18:54.312 "data_size": 0 00:18:54.312 }, 00:18:54.312 { 00:18:54.312 "name": "BaseBdev2", 00:18:54.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.312 "is_configured": false, 00:18:54.312 "data_offset": 0, 00:18:54.312 "data_size": 0 00:18:54.312 }, 00:18:54.312 { 00:18:54.312 "name": "BaseBdev3", 00:18:54.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.312 "is_configured": false, 00:18:54.312 "data_offset": 0, 00:18:54.312 "data_size": 0 00:18:54.312 } 00:18:54.312 ] 00:18:54.312 }' 00:18:54.312 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.312 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.880 [2024-11-20 07:15:51.925065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:54.880 [2024-11-20 07:15:51.925109] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.880 [2024-11-20 07:15:51.937067] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:54.880 [2024-11-20 07:15:51.937294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:54.880 [2024-11-20 07:15:51.937420] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:54.880 [2024-11-20 07:15:51.937482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:54.880 [2024-11-20 07:15:51.937690] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:54.880 [2024-11-20 07:15:51.937763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.880 [2024-11-20 07:15:51.986348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:54.880 BaseBdev1 00:18:54.880 07:15:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:54.880 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.881 07:15:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.881 [ 00:18:54.881 { 00:18:54.881 "name": "BaseBdev1", 00:18:54.881 "aliases": [ 00:18:54.881 "b37d54d6-5778-492b-be0f-fae89e95df6f" 00:18:54.881 ], 00:18:54.881 "product_name": "Malloc disk", 00:18:54.881 "block_size": 512, 00:18:54.881 "num_blocks": 65536, 00:18:54.881 "uuid": "b37d54d6-5778-492b-be0f-fae89e95df6f", 00:18:54.881 "assigned_rate_limits": { 00:18:54.881 "rw_ios_per_sec": 0, 00:18:54.881 
"rw_mbytes_per_sec": 0, 00:18:54.881 "r_mbytes_per_sec": 0, 00:18:54.881 "w_mbytes_per_sec": 0 00:18:54.881 }, 00:18:54.881 "claimed": true, 00:18:54.881 "claim_type": "exclusive_write", 00:18:54.881 "zoned": false, 00:18:54.881 "supported_io_types": { 00:18:54.881 "read": true, 00:18:54.881 "write": true, 00:18:54.881 "unmap": true, 00:18:54.881 "flush": true, 00:18:54.881 "reset": true, 00:18:54.881 "nvme_admin": false, 00:18:54.881 "nvme_io": false, 00:18:54.881 "nvme_io_md": false, 00:18:54.881 "write_zeroes": true, 00:18:54.881 "zcopy": true, 00:18:54.881 "get_zone_info": false, 00:18:54.881 "zone_management": false, 00:18:54.881 "zone_append": false, 00:18:54.881 "compare": false, 00:18:54.881 "compare_and_write": false, 00:18:54.881 "abort": true, 00:18:54.881 "seek_hole": false, 00:18:54.881 "seek_data": false, 00:18:54.881 "copy": true, 00:18:54.881 "nvme_iov_md": false 00:18:54.881 }, 00:18:54.881 "memory_domains": [ 00:18:54.881 { 00:18:54.881 "dma_device_id": "system", 00:18:54.881 "dma_device_type": 1 00:18:54.881 }, 00:18:54.881 { 00:18:54.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.881 "dma_device_type": 2 00:18:54.881 } 00:18:54.881 ], 00:18:54.881 "driver_specific": {} 00:18:54.881 } 00:18:54.881 ] 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:54.881 07:15:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:54.881 "name": "Existed_Raid", 00:18:54.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.881 "strip_size_kb": 64, 00:18:54.881 "state": "configuring", 00:18:54.881 "raid_level": "raid5f", 00:18:54.881 "superblock": false, 00:18:54.881 "num_base_bdevs": 3, 00:18:54.881 "num_base_bdevs_discovered": 1, 00:18:54.881 "num_base_bdevs_operational": 3, 00:18:54.881 "base_bdevs_list": [ 00:18:54.881 { 00:18:54.881 "name": "BaseBdev1", 00:18:54.881 "uuid": "b37d54d6-5778-492b-be0f-fae89e95df6f", 00:18:54.881 "is_configured": true, 00:18:54.881 "data_offset": 0, 00:18:54.881 "data_size": 65536 00:18:54.881 }, 00:18:54.881 { 00:18:54.881 "name": 
"BaseBdev2", 00:18:54.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.881 "is_configured": false, 00:18:54.881 "data_offset": 0, 00:18:54.881 "data_size": 0 00:18:54.881 }, 00:18:54.881 { 00:18:54.881 "name": "BaseBdev3", 00:18:54.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.881 "is_configured": false, 00:18:54.881 "data_offset": 0, 00:18:54.881 "data_size": 0 00:18:54.881 } 00:18:54.881 ] 00:18:54.881 }' 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:54.881 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.448 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:55.448 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.448 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.448 [2024-11-20 07:15:52.538654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:55.448 [2024-11-20 07:15:52.538727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.449 [2024-11-20 07:15:52.546691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:55.449 [2024-11-20 07:15:52.549786] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:18:55.449 [2024-11-20 07:15:52.550011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:55.449 [2024-11-20 07:15:52.550045] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:55.449 [2024-11-20 07:15:52.550067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.449 "name": "Existed_Raid", 00:18:55.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.449 "strip_size_kb": 64, 00:18:55.449 "state": "configuring", 00:18:55.449 "raid_level": "raid5f", 00:18:55.449 "superblock": false, 00:18:55.449 "num_base_bdevs": 3, 00:18:55.449 "num_base_bdevs_discovered": 1, 00:18:55.449 "num_base_bdevs_operational": 3, 00:18:55.449 "base_bdevs_list": [ 00:18:55.449 { 00:18:55.449 "name": "BaseBdev1", 00:18:55.449 "uuid": "b37d54d6-5778-492b-be0f-fae89e95df6f", 00:18:55.449 "is_configured": true, 00:18:55.449 "data_offset": 0, 00:18:55.449 "data_size": 65536 00:18:55.449 }, 00:18:55.449 { 00:18:55.449 "name": "BaseBdev2", 00:18:55.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.449 "is_configured": false, 00:18:55.449 "data_offset": 0, 00:18:55.449 "data_size": 0 00:18:55.449 }, 00:18:55.449 { 00:18:55.449 "name": "BaseBdev3", 00:18:55.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.449 "is_configured": false, 00:18:55.449 "data_offset": 0, 00:18:55.449 "data_size": 0 00:18:55.449 } 00:18:55.449 ] 00:18:55.449 }' 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.449 07:15:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.016 [2024-11-20 07:15:53.105047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:56.016 BaseBdev2 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.016 07:15:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:56.016 [ 00:18:56.016 { 00:18:56.016 "name": "BaseBdev2", 00:18:56.016 "aliases": [ 00:18:56.016 "32f46187-8530-4378-bd8b-4079eab37ab9" 00:18:56.016 ], 00:18:56.016 "product_name": "Malloc disk", 00:18:56.016 "block_size": 512, 00:18:56.016 "num_blocks": 65536, 00:18:56.016 "uuid": "32f46187-8530-4378-bd8b-4079eab37ab9", 00:18:56.016 "assigned_rate_limits": { 00:18:56.016 "rw_ios_per_sec": 0, 00:18:56.016 "rw_mbytes_per_sec": 0, 00:18:56.016 "r_mbytes_per_sec": 0, 00:18:56.016 "w_mbytes_per_sec": 0 00:18:56.016 }, 00:18:56.016 "claimed": true, 00:18:56.016 "claim_type": "exclusive_write", 00:18:56.016 "zoned": false, 00:18:56.016 "supported_io_types": { 00:18:56.016 "read": true, 00:18:56.016 "write": true, 00:18:56.016 "unmap": true, 00:18:56.016 "flush": true, 00:18:56.016 "reset": true, 00:18:56.016 "nvme_admin": false, 00:18:56.016 "nvme_io": false, 00:18:56.016 "nvme_io_md": false, 00:18:56.016 "write_zeroes": true, 00:18:56.016 "zcopy": true, 00:18:56.016 "get_zone_info": false, 00:18:56.016 "zone_management": false, 00:18:56.016 "zone_append": false, 00:18:56.016 "compare": false, 00:18:56.016 "compare_and_write": false, 00:18:56.016 "abort": true, 00:18:56.016 "seek_hole": false, 00:18:56.016 "seek_data": false, 00:18:56.016 "copy": true, 00:18:56.016 "nvme_iov_md": false 00:18:56.016 }, 00:18:56.017 "memory_domains": [ 00:18:56.017 { 00:18:56.017 "dma_device_id": "system", 00:18:56.017 "dma_device_type": 1 00:18:56.017 }, 00:18:56.017 { 00:18:56.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.017 "dma_device_type": 2 00:18:56.017 } 00:18:56.017 ], 00:18:56.017 "driver_specific": {} 00:18:56.017 } 00:18:56.017 ] 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:18:56.017 "name": "Existed_Raid", 00:18:56.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.017 "strip_size_kb": 64, 00:18:56.017 "state": "configuring", 00:18:56.017 "raid_level": "raid5f", 00:18:56.017 "superblock": false, 00:18:56.017 "num_base_bdevs": 3, 00:18:56.017 "num_base_bdevs_discovered": 2, 00:18:56.017 "num_base_bdevs_operational": 3, 00:18:56.017 "base_bdevs_list": [ 00:18:56.017 { 00:18:56.017 "name": "BaseBdev1", 00:18:56.017 "uuid": "b37d54d6-5778-492b-be0f-fae89e95df6f", 00:18:56.017 "is_configured": true, 00:18:56.017 "data_offset": 0, 00:18:56.017 "data_size": 65536 00:18:56.017 }, 00:18:56.017 { 00:18:56.017 "name": "BaseBdev2", 00:18:56.017 "uuid": "32f46187-8530-4378-bd8b-4079eab37ab9", 00:18:56.017 "is_configured": true, 00:18:56.017 "data_offset": 0, 00:18:56.017 "data_size": 65536 00:18:56.017 }, 00:18:56.017 { 00:18:56.017 "name": "BaseBdev3", 00:18:56.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.017 "is_configured": false, 00:18:56.017 "data_offset": 0, 00:18:56.017 "data_size": 0 00:18:56.017 } 00:18:56.017 ] 00:18:56.017 }' 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.017 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.628 [2024-11-20 07:15:53.705453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:56.628 [2024-11-20 07:15:53.705767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:56.628 [2024-11-20 07:15:53.705799] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:18:56.628 [2024-11-20 07:15:53.706159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:56.628 [2024-11-20 07:15:53.711446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:56.628 BaseBdev3 00:18:56.628 [2024-11-20 07:15:53.711594] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:56.628 [2024-11-20 07:15:53.711999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.628 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.629 [ 00:18:56.629 { 00:18:56.629 "name": "BaseBdev3", 00:18:56.629 "aliases": [ 00:18:56.629 "6b2cc76b-957d-4b8f-bcdd-42062bde515c" 00:18:56.629 ], 00:18:56.629 "product_name": "Malloc disk", 00:18:56.629 "block_size": 512, 00:18:56.629 "num_blocks": 65536, 00:18:56.629 "uuid": "6b2cc76b-957d-4b8f-bcdd-42062bde515c", 00:18:56.629 "assigned_rate_limits": { 00:18:56.629 "rw_ios_per_sec": 0, 00:18:56.629 "rw_mbytes_per_sec": 0, 00:18:56.629 "r_mbytes_per_sec": 0, 00:18:56.629 "w_mbytes_per_sec": 0 00:18:56.629 }, 00:18:56.629 "claimed": true, 00:18:56.629 "claim_type": "exclusive_write", 00:18:56.629 "zoned": false, 00:18:56.629 "supported_io_types": { 00:18:56.629 "read": true, 00:18:56.629 "write": true, 00:18:56.629 "unmap": true, 00:18:56.629 "flush": true, 00:18:56.629 "reset": true, 00:18:56.629 "nvme_admin": false, 00:18:56.629 "nvme_io": false, 00:18:56.629 "nvme_io_md": false, 00:18:56.629 "write_zeroes": true, 00:18:56.629 "zcopy": true, 00:18:56.629 "get_zone_info": false, 00:18:56.629 "zone_management": false, 00:18:56.629 "zone_append": false, 00:18:56.629 "compare": false, 00:18:56.629 "compare_and_write": false, 00:18:56.629 "abort": true, 00:18:56.629 "seek_hole": false, 00:18:56.629 "seek_data": false, 00:18:56.629 "copy": true, 00:18:56.629 "nvme_iov_md": false 00:18:56.629 }, 00:18:56.629 "memory_domains": [ 00:18:56.629 { 00:18:56.629 "dma_device_id": "system", 00:18:56.629 "dma_device_type": 1 00:18:56.629 }, 00:18:56.629 { 00:18:56.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.629 "dma_device_type": 2 00:18:56.629 } 00:18:56.629 ], 00:18:56.629 "driver_specific": {} 00:18:56.629 } 00:18:56.629 ] 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.629 07:15:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:56.629 "name": "Existed_Raid", 00:18:56.629 "uuid": "b733b72b-026f-4025-a4de-1cff5172510f", 00:18:56.629 "strip_size_kb": 64, 00:18:56.629 "state": "online", 00:18:56.629 "raid_level": "raid5f", 00:18:56.629 "superblock": false, 00:18:56.629 "num_base_bdevs": 3, 00:18:56.629 "num_base_bdevs_discovered": 3, 00:18:56.629 "num_base_bdevs_operational": 3, 00:18:56.629 "base_bdevs_list": [ 00:18:56.629 { 00:18:56.629 "name": "BaseBdev1", 00:18:56.629 "uuid": "b37d54d6-5778-492b-be0f-fae89e95df6f", 00:18:56.629 "is_configured": true, 00:18:56.629 "data_offset": 0, 00:18:56.629 "data_size": 65536 00:18:56.629 }, 00:18:56.629 { 00:18:56.629 "name": "BaseBdev2", 00:18:56.629 "uuid": "32f46187-8530-4378-bd8b-4079eab37ab9", 00:18:56.629 "is_configured": true, 00:18:56.629 "data_offset": 0, 00:18:56.629 "data_size": 65536 00:18:56.629 }, 00:18:56.629 { 00:18:56.629 "name": "BaseBdev3", 00:18:56.629 "uuid": "6b2cc76b-957d-4b8f-bcdd-42062bde515c", 00:18:56.629 "is_configured": true, 00:18:56.629 "data_offset": 0, 00:18:56.629 "data_size": 65536 00:18:56.629 } 00:18:56.629 ] 00:18:56.629 }' 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:56.629 07:15:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:57.198 07:15:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.198 [2024-11-20 07:15:54.233998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:57.198 "name": "Existed_Raid", 00:18:57.198 "aliases": [ 00:18:57.198 "b733b72b-026f-4025-a4de-1cff5172510f" 00:18:57.198 ], 00:18:57.198 "product_name": "Raid Volume", 00:18:57.198 "block_size": 512, 00:18:57.198 "num_blocks": 131072, 00:18:57.198 "uuid": "b733b72b-026f-4025-a4de-1cff5172510f", 00:18:57.198 "assigned_rate_limits": { 00:18:57.198 "rw_ios_per_sec": 0, 00:18:57.198 "rw_mbytes_per_sec": 0, 00:18:57.198 "r_mbytes_per_sec": 0, 00:18:57.198 "w_mbytes_per_sec": 0 00:18:57.198 }, 00:18:57.198 "claimed": false, 00:18:57.198 "zoned": false, 00:18:57.198 "supported_io_types": { 00:18:57.198 "read": true, 00:18:57.198 "write": true, 00:18:57.198 "unmap": false, 00:18:57.198 "flush": false, 00:18:57.198 "reset": true, 00:18:57.198 "nvme_admin": false, 00:18:57.198 "nvme_io": false, 00:18:57.198 "nvme_io_md": false, 00:18:57.198 "write_zeroes": true, 00:18:57.198 "zcopy": false, 00:18:57.198 "get_zone_info": false, 00:18:57.198 "zone_management": false, 00:18:57.198 "zone_append": false, 
00:18:57.198 "compare": false, 00:18:57.198 "compare_and_write": false, 00:18:57.198 "abort": false, 00:18:57.198 "seek_hole": false, 00:18:57.198 "seek_data": false, 00:18:57.198 "copy": false, 00:18:57.198 "nvme_iov_md": false 00:18:57.198 }, 00:18:57.198 "driver_specific": { 00:18:57.198 "raid": { 00:18:57.198 "uuid": "b733b72b-026f-4025-a4de-1cff5172510f", 00:18:57.198 "strip_size_kb": 64, 00:18:57.198 "state": "online", 00:18:57.198 "raid_level": "raid5f", 00:18:57.198 "superblock": false, 00:18:57.198 "num_base_bdevs": 3, 00:18:57.198 "num_base_bdevs_discovered": 3, 00:18:57.198 "num_base_bdevs_operational": 3, 00:18:57.198 "base_bdevs_list": [ 00:18:57.198 { 00:18:57.198 "name": "BaseBdev1", 00:18:57.198 "uuid": "b37d54d6-5778-492b-be0f-fae89e95df6f", 00:18:57.198 "is_configured": true, 00:18:57.198 "data_offset": 0, 00:18:57.198 "data_size": 65536 00:18:57.198 }, 00:18:57.198 { 00:18:57.198 "name": "BaseBdev2", 00:18:57.198 "uuid": "32f46187-8530-4378-bd8b-4079eab37ab9", 00:18:57.198 "is_configured": true, 00:18:57.198 "data_offset": 0, 00:18:57.198 "data_size": 65536 00:18:57.198 }, 00:18:57.198 { 00:18:57.198 "name": "BaseBdev3", 00:18:57.198 "uuid": "6b2cc76b-957d-4b8f-bcdd-42062bde515c", 00:18:57.198 "is_configured": true, 00:18:57.198 "data_offset": 0, 00:18:57.198 "data_size": 65536 00:18:57.198 } 00:18:57.198 ] 00:18:57.198 } 00:18:57.198 } 00:18:57.198 }' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:57.198 BaseBdev2 00:18:57.198 BaseBdev3' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.198 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.457 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:57.457 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.458 [2024-11-20 07:15:54.549855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:57.458 
07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:57.458 "name": "Existed_Raid", 00:18:57.458 "uuid": "b733b72b-026f-4025-a4de-1cff5172510f", 00:18:57.458 "strip_size_kb": 64, 00:18:57.458 "state": 
"online", 00:18:57.458 "raid_level": "raid5f", 00:18:57.458 "superblock": false, 00:18:57.458 "num_base_bdevs": 3, 00:18:57.458 "num_base_bdevs_discovered": 2, 00:18:57.458 "num_base_bdevs_operational": 2, 00:18:57.458 "base_bdevs_list": [ 00:18:57.458 { 00:18:57.458 "name": null, 00:18:57.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:57.458 "is_configured": false, 00:18:57.458 "data_offset": 0, 00:18:57.458 "data_size": 65536 00:18:57.458 }, 00:18:57.458 { 00:18:57.458 "name": "BaseBdev2", 00:18:57.458 "uuid": "32f46187-8530-4378-bd8b-4079eab37ab9", 00:18:57.458 "is_configured": true, 00:18:57.458 "data_offset": 0, 00:18:57.458 "data_size": 65536 00:18:57.458 }, 00:18:57.458 { 00:18:57.458 "name": "BaseBdev3", 00:18:57.458 "uuid": "6b2cc76b-957d-4b8f-bcdd-42062bde515c", 00:18:57.458 "is_configured": true, 00:18:57.458 "data_offset": 0, 00:18:57.458 "data_size": 65536 00:18:57.458 } 00:18:57.458 ] 00:18:57.458 }' 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:57.458 07:15:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.025 [2024-11-20 07:15:55.221212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.025 [2024-11-20 07:15:55.221506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.025 [2024-11-20 07:15:55.311051] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.025 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.284 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:58.284 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:18:58.284 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.285 [2024-11-20 07:15:55.371143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:58.285 [2024-11-20 07:15:55.371348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.285 BaseBdev2 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:58.285 [ 00:18:58.285 { 00:18:58.285 "name": "BaseBdev2", 00:18:58.285 "aliases": [ 00:18:58.285 "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d" 00:18:58.285 ], 00:18:58.285 "product_name": "Malloc disk", 00:18:58.285 "block_size": 512, 00:18:58.285 "num_blocks": 65536, 00:18:58.285 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:18:58.285 "assigned_rate_limits": { 00:18:58.285 "rw_ios_per_sec": 0, 00:18:58.285 "rw_mbytes_per_sec": 0, 00:18:58.285 "r_mbytes_per_sec": 0, 00:18:58.285 "w_mbytes_per_sec": 0 00:18:58.285 }, 00:18:58.285 "claimed": false, 00:18:58.285 "zoned": false, 00:18:58.285 "supported_io_types": { 00:18:58.285 "read": true, 00:18:58.285 "write": true, 00:18:58.285 "unmap": true, 00:18:58.285 "flush": true, 00:18:58.285 "reset": true, 00:18:58.285 "nvme_admin": false, 00:18:58.285 "nvme_io": false, 00:18:58.285 "nvme_io_md": false, 00:18:58.285 "write_zeroes": true, 00:18:58.285 "zcopy": true, 00:18:58.285 "get_zone_info": false, 00:18:58.285 "zone_management": false, 00:18:58.285 "zone_append": false, 00:18:58.285 "compare": false, 00:18:58.285 "compare_and_write": false, 00:18:58.285 "abort": true, 00:18:58.285 "seek_hole": false, 00:18:58.285 "seek_data": false, 00:18:58.285 "copy": true, 00:18:58.285 "nvme_iov_md": false 00:18:58.285 }, 00:18:58.285 "memory_domains": [ 00:18:58.285 { 00:18:58.285 "dma_device_id": "system", 00:18:58.285 "dma_device_type": 1 00:18:58.285 }, 00:18:58.285 { 00:18:58.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.285 "dma_device_type": 2 00:18:58.285 } 00:18:58.285 ], 00:18:58.285 "driver_specific": {} 00:18:58.285 } 00:18:58.285 ] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.285 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.545 BaseBdev3 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:58.545 [ 00:18:58.545 { 00:18:58.545 "name": "BaseBdev3", 00:18:58.545 "aliases": [ 00:18:58.545 "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4" 00:18:58.545 ], 00:18:58.545 "product_name": "Malloc disk", 00:18:58.545 "block_size": 512, 00:18:58.545 "num_blocks": 65536, 00:18:58.545 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:18:58.545 "assigned_rate_limits": { 00:18:58.545 "rw_ios_per_sec": 0, 00:18:58.545 "rw_mbytes_per_sec": 0, 00:18:58.545 "r_mbytes_per_sec": 0, 00:18:58.545 "w_mbytes_per_sec": 0 00:18:58.545 }, 00:18:58.545 "claimed": false, 00:18:58.545 "zoned": false, 00:18:58.545 "supported_io_types": { 00:18:58.545 "read": true, 00:18:58.545 "write": true, 00:18:58.545 "unmap": true, 00:18:58.545 "flush": true, 00:18:58.545 "reset": true, 00:18:58.545 "nvme_admin": false, 00:18:58.545 "nvme_io": false, 00:18:58.545 "nvme_io_md": false, 00:18:58.545 "write_zeroes": true, 00:18:58.545 "zcopy": true, 00:18:58.545 "get_zone_info": false, 00:18:58.545 "zone_management": false, 00:18:58.545 "zone_append": false, 00:18:58.545 "compare": false, 00:18:58.545 "compare_and_write": false, 00:18:58.545 "abort": true, 00:18:58.545 "seek_hole": false, 00:18:58.545 "seek_data": false, 00:18:58.545 "copy": true, 00:18:58.545 "nvme_iov_md": false 00:18:58.545 }, 00:18:58.545 "memory_domains": [ 00:18:58.545 { 00:18:58.545 "dma_device_id": "system", 00:18:58.545 "dma_device_type": 1 00:18:58.545 }, 00:18:58.545 { 00:18:58.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:58.545 "dma_device_type": 2 00:18:58.545 } 00:18:58.545 ], 00:18:58.545 "driver_specific": {} 00:18:58.545 } 00:18:58.545 ] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:58.545 07:15:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.545 [2024-11-20 07:15:55.668285] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:58.545 [2024-11-20 07:15:55.668474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:58.545 [2024-11-20 07:15:55.668605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:58.545 [2024-11-20 07:15:55.671103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:58.545 07:15:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:58.545 "name": "Existed_Raid", 00:18:58.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.545 "strip_size_kb": 64, 00:18:58.545 "state": "configuring", 00:18:58.545 "raid_level": "raid5f", 00:18:58.545 "superblock": false, 00:18:58.545 "num_base_bdevs": 3, 00:18:58.545 "num_base_bdevs_discovered": 2, 00:18:58.545 "num_base_bdevs_operational": 3, 00:18:58.545 "base_bdevs_list": [ 00:18:58.545 { 00:18:58.545 "name": "BaseBdev1", 00:18:58.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:58.545 "is_configured": false, 00:18:58.545 "data_offset": 0, 00:18:58.545 "data_size": 0 00:18:58.545 }, 00:18:58.545 { 00:18:58.545 "name": "BaseBdev2", 00:18:58.545 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:18:58.545 "is_configured": true, 00:18:58.545 "data_offset": 0, 00:18:58.545 "data_size": 65536 00:18:58.545 }, 00:18:58.545 { 00:18:58.545 "name": "BaseBdev3", 00:18:58.545 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:18:58.545 "is_configured": true, 
00:18:58.545 "data_offset": 0, 00:18:58.545 "data_size": 65536 00:18:58.545 } 00:18:58.545 ] 00:18:58.545 }' 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:58.545 07:15:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.113 [2024-11-20 07:15:56.196513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.113 07:15:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.113 "name": "Existed_Raid", 00:18:59.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.113 "strip_size_kb": 64, 00:18:59.113 "state": "configuring", 00:18:59.113 "raid_level": "raid5f", 00:18:59.113 "superblock": false, 00:18:59.113 "num_base_bdevs": 3, 00:18:59.113 "num_base_bdevs_discovered": 1, 00:18:59.113 "num_base_bdevs_operational": 3, 00:18:59.113 "base_bdevs_list": [ 00:18:59.113 { 00:18:59.113 "name": "BaseBdev1", 00:18:59.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.113 "is_configured": false, 00:18:59.113 "data_offset": 0, 00:18:59.113 "data_size": 0 00:18:59.113 }, 00:18:59.113 { 00:18:59.113 "name": null, 00:18:59.113 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:18:59.113 "is_configured": false, 00:18:59.113 "data_offset": 0, 00:18:59.113 "data_size": 65536 00:18:59.113 }, 00:18:59.113 { 00:18:59.113 "name": "BaseBdev3", 00:18:59.113 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:18:59.113 "is_configured": true, 00:18:59.113 "data_offset": 0, 00:18:59.113 "data_size": 65536 00:18:59.113 } 00:18:59.113 ] 00:18:59.113 }' 00:18:59.113 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.113 07:15:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.681 [2024-11-20 07:15:56.801480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:59.681 BaseBdev1 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:59.681 07:15:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.681 [ 00:18:59.681 { 00:18:59.681 "name": "BaseBdev1", 00:18:59.681 "aliases": [ 00:18:59.681 "22802428-9820-4b41-beb3-ba39db2f47cc" 00:18:59.681 ], 00:18:59.681 "product_name": "Malloc disk", 00:18:59.681 "block_size": 512, 00:18:59.681 "num_blocks": 65536, 00:18:59.681 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:18:59.681 "assigned_rate_limits": { 00:18:59.681 "rw_ios_per_sec": 0, 00:18:59.681 "rw_mbytes_per_sec": 0, 00:18:59.681 "r_mbytes_per_sec": 0, 00:18:59.681 "w_mbytes_per_sec": 0 00:18:59.681 }, 00:18:59.681 "claimed": true, 00:18:59.681 "claim_type": "exclusive_write", 00:18:59.681 "zoned": false, 00:18:59.681 "supported_io_types": { 00:18:59.681 "read": true, 00:18:59.681 "write": true, 00:18:59.681 "unmap": true, 00:18:59.681 "flush": true, 00:18:59.681 "reset": true, 00:18:59.681 "nvme_admin": false, 00:18:59.681 "nvme_io": false, 00:18:59.681 "nvme_io_md": false, 00:18:59.681 "write_zeroes": true, 00:18:59.681 "zcopy": true, 00:18:59.681 "get_zone_info": false, 00:18:59.681 "zone_management": false, 00:18:59.681 "zone_append": false, 00:18:59.681 
"compare": false, 00:18:59.681 "compare_and_write": false, 00:18:59.681 "abort": true, 00:18:59.681 "seek_hole": false, 00:18:59.681 "seek_data": false, 00:18:59.681 "copy": true, 00:18:59.681 "nvme_iov_md": false 00:18:59.681 }, 00:18:59.681 "memory_domains": [ 00:18:59.681 { 00:18:59.681 "dma_device_id": "system", 00:18:59.681 "dma_device_type": 1 00:18:59.681 }, 00:18:59.681 { 00:18:59.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.681 "dma_device_type": 2 00:18:59.681 } 00:18:59.681 ], 00:18:59.681 "driver_specific": {} 00:18:59.681 } 00:18:59.681 ] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.681 07:15:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.681 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.681 "name": "Existed_Raid", 00:18:59.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:59.681 "strip_size_kb": 64, 00:18:59.681 "state": "configuring", 00:18:59.681 "raid_level": "raid5f", 00:18:59.681 "superblock": false, 00:18:59.681 "num_base_bdevs": 3, 00:18:59.681 "num_base_bdevs_discovered": 2, 00:18:59.681 "num_base_bdevs_operational": 3, 00:18:59.681 "base_bdevs_list": [ 00:18:59.681 { 00:18:59.681 "name": "BaseBdev1", 00:18:59.681 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:18:59.682 "is_configured": true, 00:18:59.682 "data_offset": 0, 00:18:59.682 "data_size": 65536 00:18:59.682 }, 00:18:59.682 { 00:18:59.682 "name": null, 00:18:59.682 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:18:59.682 "is_configured": false, 00:18:59.682 "data_offset": 0, 00:18:59.682 "data_size": 65536 00:18:59.682 }, 00:18:59.682 { 00:18:59.682 "name": "BaseBdev3", 00:18:59.682 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:18:59.682 "is_configured": true, 00:18:59.682 "data_offset": 0, 00:18:59.682 "data_size": 65536 00:18:59.682 } 00:18:59.682 ] 00:18:59.682 }' 00:18:59.682 07:15:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.682 07:15:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.248 07:15:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.249 [2024-11-20 07:15:57.437735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.249 07:15:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.249 "name": "Existed_Raid", 00:19:00.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.249 "strip_size_kb": 64, 00:19:00.249 "state": "configuring", 00:19:00.249 "raid_level": "raid5f", 00:19:00.249 "superblock": false, 00:19:00.249 "num_base_bdevs": 3, 00:19:00.249 "num_base_bdevs_discovered": 1, 00:19:00.249 "num_base_bdevs_operational": 3, 00:19:00.249 "base_bdevs_list": [ 00:19:00.249 { 00:19:00.249 "name": "BaseBdev1", 00:19:00.249 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:00.249 "is_configured": true, 00:19:00.249 "data_offset": 0, 00:19:00.249 "data_size": 65536 00:19:00.249 }, 00:19:00.249 { 00:19:00.249 "name": null, 00:19:00.249 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:19:00.249 "is_configured": false, 00:19:00.249 "data_offset": 0, 00:19:00.249 "data_size": 65536 00:19:00.249 }, 00:19:00.249 { 00:19:00.249 "name": null, 
00:19:00.249 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:19:00.249 "is_configured": false, 00:19:00.249 "data_offset": 0, 00:19:00.249 "data_size": 65536 00:19:00.249 } 00:19:00.249 ] 00:19:00.249 }' 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.249 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.864 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.864 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.864 07:15:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:00.864 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.864 07:15:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.864 [2024-11-20 07:15:58.026017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:00.864 07:15:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.864 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:00.864 "name": "Existed_Raid", 00:19:00.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.864 "strip_size_kb": 64, 00:19:00.864 "state": "configuring", 00:19:00.864 "raid_level": "raid5f", 00:19:00.864 "superblock": false, 00:19:00.864 "num_base_bdevs": 3, 00:19:00.864 "num_base_bdevs_discovered": 2, 00:19:00.864 "num_base_bdevs_operational": 3, 00:19:00.864 "base_bdevs_list": [ 00:19:00.864 { 
00:19:00.864 "name": "BaseBdev1", 00:19:00.864 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:00.864 "is_configured": true, 00:19:00.864 "data_offset": 0, 00:19:00.864 "data_size": 65536 00:19:00.864 }, 00:19:00.864 { 00:19:00.864 "name": null, 00:19:00.864 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:19:00.864 "is_configured": false, 00:19:00.864 "data_offset": 0, 00:19:00.864 "data_size": 65536 00:19:00.864 }, 00:19:00.864 { 00:19:00.864 "name": "BaseBdev3", 00:19:00.864 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:19:00.864 "is_configured": true, 00:19:00.864 "data_offset": 0, 00:19:00.864 "data_size": 65536 00:19:00.864 } 00:19:00.864 ] 00:19:00.864 }' 00:19:00.865 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:00.865 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.433 [2024-11-20 07:15:58.598148] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:01.433 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.434 "name": "Existed_Raid", 00:19:01.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.434 "strip_size_kb": 64, 00:19:01.434 "state": "configuring", 00:19:01.434 "raid_level": "raid5f", 00:19:01.434 "superblock": false, 00:19:01.434 "num_base_bdevs": 3, 00:19:01.434 "num_base_bdevs_discovered": 1, 00:19:01.434 "num_base_bdevs_operational": 3, 00:19:01.434 "base_bdevs_list": [ 00:19:01.434 { 00:19:01.434 "name": null, 00:19:01.434 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:01.434 "is_configured": false, 00:19:01.434 "data_offset": 0, 00:19:01.434 "data_size": 65536 00:19:01.434 }, 00:19:01.434 { 00:19:01.434 "name": null, 00:19:01.434 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:19:01.434 "is_configured": false, 00:19:01.434 "data_offset": 0, 00:19:01.434 "data_size": 65536 00:19:01.434 }, 00:19:01.434 { 00:19:01.434 "name": "BaseBdev3", 00:19:01.434 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:19:01.434 "is_configured": true, 00:19:01.434 "data_offset": 0, 00:19:01.434 "data_size": 65536 00:19:01.434 } 00:19:01.434 ] 00:19:01.434 }' 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.434 07:15:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.002 [2024-11-20 07:15:59.273897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.002 07:15:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.002 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.261 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.261 "name": "Existed_Raid", 00:19:02.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:02.261 "strip_size_kb": 64, 00:19:02.261 "state": "configuring", 00:19:02.261 "raid_level": "raid5f", 00:19:02.261 "superblock": false, 00:19:02.261 "num_base_bdevs": 3, 00:19:02.261 "num_base_bdevs_discovered": 2, 00:19:02.261 "num_base_bdevs_operational": 3, 00:19:02.261 "base_bdevs_list": [ 00:19:02.261 { 00:19:02.261 "name": null, 00:19:02.261 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:02.261 "is_configured": false, 00:19:02.261 "data_offset": 0, 00:19:02.261 "data_size": 65536 00:19:02.261 }, 00:19:02.261 { 00:19:02.261 "name": "BaseBdev2", 00:19:02.261 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:19:02.261 "is_configured": true, 00:19:02.261 "data_offset": 0, 00:19:02.261 "data_size": 65536 00:19:02.261 }, 00:19:02.261 { 00:19:02.261 "name": "BaseBdev3", 00:19:02.261 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:19:02.261 "is_configured": true, 00:19:02.261 "data_offset": 0, 00:19:02.261 "data_size": 65536 00:19:02.261 } 00:19:02.261 ] 00:19:02.261 }' 00:19:02.261 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.261 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.520 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.520 07:15:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.520 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.520 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.520 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 22802428-9820-4b41-beb3-ba39db2f47cc 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.780 [2024-11-20 07:15:59.931985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:02.780 [2024-11-20 07:15:59.932268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:02.780 [2024-11-20 07:15:59.932299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:02.780 [2024-11-20 07:15:59.932614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:19:02.780 [2024-11-20 07:15:59.937594] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:02.780 [2024-11-20 07:15:59.937621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:02.780 [2024-11-20 07:15:59.937976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:02.780 NewBaseBdev 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.780 07:15:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.780 [ 00:19:02.780 { 00:19:02.780 "name": "NewBaseBdev", 00:19:02.780 "aliases": [ 00:19:02.780 "22802428-9820-4b41-beb3-ba39db2f47cc" 00:19:02.780 ], 00:19:02.780 "product_name": "Malloc disk", 00:19:02.780 "block_size": 512, 00:19:02.780 "num_blocks": 65536, 00:19:02.780 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:02.780 "assigned_rate_limits": { 00:19:02.780 "rw_ios_per_sec": 0, 00:19:02.780 "rw_mbytes_per_sec": 0, 00:19:02.780 "r_mbytes_per_sec": 0, 00:19:02.780 "w_mbytes_per_sec": 0 00:19:02.780 }, 00:19:02.780 "claimed": true, 00:19:02.780 "claim_type": "exclusive_write", 00:19:02.780 "zoned": false, 00:19:02.780 "supported_io_types": { 00:19:02.780 "read": true, 00:19:02.780 "write": true, 00:19:02.780 "unmap": true, 00:19:02.780 "flush": true, 00:19:02.780 "reset": true, 00:19:02.780 "nvme_admin": false, 00:19:02.780 "nvme_io": false, 00:19:02.780 "nvme_io_md": false, 00:19:02.780 "write_zeroes": true, 00:19:02.780 "zcopy": true, 00:19:02.780 "get_zone_info": false, 00:19:02.780 "zone_management": false, 00:19:02.780 "zone_append": false, 00:19:02.780 "compare": false, 00:19:02.780 "compare_and_write": false, 00:19:02.780 "abort": true, 00:19:02.780 "seek_hole": false, 00:19:02.780 "seek_data": false, 00:19:02.780 "copy": true, 00:19:02.780 "nvme_iov_md": false 00:19:02.780 }, 00:19:02.780 "memory_domains": [ 00:19:02.780 { 00:19:02.780 "dma_device_id": "system", 00:19:02.780 "dma_device_type": 1 00:19:02.780 }, 00:19:02.780 { 00:19:02.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:02.780 "dma_device_type": 2 00:19:02.780 } 00:19:02.780 ], 00:19:02.780 "driver_specific": {} 00:19:02.780 } 00:19:02.780 ] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:19:02.780 07:15:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:02.780 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:02.781 07:15:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.781 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:02.781 "name": "Existed_Raid", 00:19:02.781 "uuid": "43932409-eb14-4553-89d7-09f86f9d7e25", 00:19:02.781 "strip_size_kb": 64, 00:19:02.781 "state": "online", 
00:19:02.781 "raid_level": "raid5f", 00:19:02.781 "superblock": false, 00:19:02.781 "num_base_bdevs": 3, 00:19:02.781 "num_base_bdevs_discovered": 3, 00:19:02.781 "num_base_bdevs_operational": 3, 00:19:02.781 "base_bdevs_list": [ 00:19:02.781 { 00:19:02.781 "name": "NewBaseBdev", 00:19:02.781 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:02.781 "is_configured": true, 00:19:02.781 "data_offset": 0, 00:19:02.781 "data_size": 65536 00:19:02.781 }, 00:19:02.781 { 00:19:02.781 "name": "BaseBdev2", 00:19:02.781 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:19:02.781 "is_configured": true, 00:19:02.781 "data_offset": 0, 00:19:02.781 "data_size": 65536 00:19:02.781 }, 00:19:02.781 { 00:19:02.781 "name": "BaseBdev3", 00:19:02.781 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:19:02.781 "is_configured": true, 00:19:02.781 "data_offset": 0, 00:19:02.781 "data_size": 65536 00:19:02.781 } 00:19:02.781 ] 00:19:02.781 }' 00:19:02.781 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:02.781 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:03.350 07:16:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:03.350 [2024-11-20 07:16:00.511953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:03.350 "name": "Existed_Raid", 00:19:03.350 "aliases": [ 00:19:03.350 "43932409-eb14-4553-89d7-09f86f9d7e25" 00:19:03.350 ], 00:19:03.350 "product_name": "Raid Volume", 00:19:03.350 "block_size": 512, 00:19:03.350 "num_blocks": 131072, 00:19:03.350 "uuid": "43932409-eb14-4553-89d7-09f86f9d7e25", 00:19:03.350 "assigned_rate_limits": { 00:19:03.350 "rw_ios_per_sec": 0, 00:19:03.350 "rw_mbytes_per_sec": 0, 00:19:03.350 "r_mbytes_per_sec": 0, 00:19:03.350 "w_mbytes_per_sec": 0 00:19:03.350 }, 00:19:03.350 "claimed": false, 00:19:03.350 "zoned": false, 00:19:03.350 "supported_io_types": { 00:19:03.350 "read": true, 00:19:03.350 "write": true, 00:19:03.350 "unmap": false, 00:19:03.350 "flush": false, 00:19:03.350 "reset": true, 00:19:03.350 "nvme_admin": false, 00:19:03.350 "nvme_io": false, 00:19:03.350 "nvme_io_md": false, 00:19:03.350 "write_zeroes": true, 00:19:03.350 "zcopy": false, 00:19:03.350 "get_zone_info": false, 00:19:03.350 "zone_management": false, 00:19:03.350 "zone_append": false, 00:19:03.350 "compare": false, 00:19:03.350 "compare_and_write": false, 00:19:03.350 "abort": false, 00:19:03.350 "seek_hole": false, 00:19:03.350 "seek_data": false, 00:19:03.350 "copy": false, 00:19:03.350 "nvme_iov_md": false 00:19:03.350 }, 00:19:03.350 "driver_specific": { 00:19:03.350 "raid": { 00:19:03.350 "uuid": 
"43932409-eb14-4553-89d7-09f86f9d7e25", 00:19:03.350 "strip_size_kb": 64, 00:19:03.350 "state": "online", 00:19:03.350 "raid_level": "raid5f", 00:19:03.350 "superblock": false, 00:19:03.350 "num_base_bdevs": 3, 00:19:03.350 "num_base_bdevs_discovered": 3, 00:19:03.350 "num_base_bdevs_operational": 3, 00:19:03.350 "base_bdevs_list": [ 00:19:03.350 { 00:19:03.350 "name": "NewBaseBdev", 00:19:03.350 "uuid": "22802428-9820-4b41-beb3-ba39db2f47cc", 00:19:03.350 "is_configured": true, 00:19:03.350 "data_offset": 0, 00:19:03.350 "data_size": 65536 00:19:03.350 }, 00:19:03.350 { 00:19:03.350 "name": "BaseBdev2", 00:19:03.350 "uuid": "66b7f0ff-7b24-47c2-8bf1-f4a05e6e3a4d", 00:19:03.350 "is_configured": true, 00:19:03.350 "data_offset": 0, 00:19:03.350 "data_size": 65536 00:19:03.350 }, 00:19:03.350 { 00:19:03.350 "name": "BaseBdev3", 00:19:03.350 "uuid": "a20fdf3f-a9cf-4bbf-9f4b-e8d72cddb0c4", 00:19:03.350 "is_configured": true, 00:19:03.350 "data_offset": 0, 00:19:03.350 "data_size": 65536 00:19:03.350 } 00:19:03.350 ] 00:19:03.350 } 00:19:03.350 } 00:19:03.350 }' 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:03.350 BaseBdev2 00:19:03.350 BaseBdev3' 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.350 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.610 07:16:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.610 [2024-11-20 07:16:00.823787] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:03.610 [2024-11-20 07:16:00.823950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:03.610 [2024-11-20 07:16:00.824183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:03.610 [2024-11-20 07:16:00.824644] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:03.610 [2024-11-20 07:16:00.824781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80144 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80144 ']' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80144 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80144 00:19:03.610 killing process with pid 80144 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80144' 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80144 00:19:03.610 [2024-11-20 07:16:00.864557] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.610 07:16:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80144 00:19:03.869 [2024-11-20 07:16:01.141447] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:19:05.244 00:19:05.244 real 0m11.958s 00:19:05.244 user 0m19.729s 00:19:05.244 sys 0m1.726s 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.244 ************************************ 00:19:05.244 END TEST raid5f_state_function_test 00:19:05.244 ************************************ 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:05.244 07:16:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:19:05.244 07:16:02 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:05.244 07:16:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.244 07:16:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:05.244 ************************************ 00:19:05.244 START TEST raid5f_state_function_test_sb 00:19:05.244 ************************************ 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:19:05.244 07:16:02 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:05.244 Process raid pid: 80775 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80775 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80775' 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80775 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80775 ']' 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.244 07:16:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.244 [2024-11-20 07:16:02.399883] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:19:05.244 [2024-11-20 07:16:02.400335] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.502 [2024-11-20 07:16:02.585047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.502 [2024-11-20 07:16:02.725639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.761 [2024-11-20 07:16:02.942772] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.761 [2024-11-20 07:16:02.943062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.327 [2024-11-20 07:16:03.429164] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.327 [2024-11-20 07:16:03.429358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.327 [2024-11-20 07:16:03.429387] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.327 [2024-11-20 07:16:03.429405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.327 [2024-11-20 07:16:03.429416] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:19:06.327 [2024-11-20 07:16:03.429430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.327 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.327 07:16:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.328 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.328 "name": "Existed_Raid", 00:19:06.328 "uuid": "3b8ed308-f99e-4bd1-9d3e-232e27245613", 00:19:06.328 "strip_size_kb": 64, 00:19:06.328 "state": "configuring", 00:19:06.328 "raid_level": "raid5f", 00:19:06.328 "superblock": true, 00:19:06.328 "num_base_bdevs": 3, 00:19:06.328 "num_base_bdevs_discovered": 0, 00:19:06.328 "num_base_bdevs_operational": 3, 00:19:06.328 "base_bdevs_list": [ 00:19:06.328 { 00:19:06.328 "name": "BaseBdev1", 00:19:06.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.328 "is_configured": false, 00:19:06.328 "data_offset": 0, 00:19:06.328 "data_size": 0 00:19:06.328 }, 00:19:06.328 { 00:19:06.328 "name": "BaseBdev2", 00:19:06.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.328 "is_configured": false, 00:19:06.328 "data_offset": 0, 00:19:06.328 "data_size": 0 00:19:06.328 }, 00:19:06.328 { 00:19:06.328 "name": "BaseBdev3", 00:19:06.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.328 "is_configured": false, 00:19:06.328 "data_offset": 0, 00:19:06.328 "data_size": 0 00:19:06.328 } 00:19:06.328 ] 00:19:06.328 }' 00:19:06.328 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.328 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.895 [2024-11-20 07:16:03.925286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:06.895 
[2024-11-20 07:16:03.925342] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.895 [2024-11-20 07:16:03.933232] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:06.895 [2024-11-20 07:16:03.933489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:06.895 [2024-11-20 07:16:03.933609] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:06.895 [2024-11-20 07:16:03.933745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:06.895 [2024-11-20 07:16:03.933849] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:06.895 [2024-11-20 07:16:03.934048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.895 [2024-11-20 07:16:03.977730] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:06.895 BaseBdev1 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.895 07:16:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.895 [ 00:19:06.895 { 00:19:06.895 "name": "BaseBdev1", 00:19:06.895 "aliases": [ 00:19:06.895 "89bf7f63-1a74-4b85-a927-fdae69bbe264" 00:19:06.895 ], 00:19:06.895 "product_name": "Malloc disk", 00:19:06.895 "block_size": 512, 00:19:06.895 
"num_blocks": 65536, 00:19:06.895 "uuid": "89bf7f63-1a74-4b85-a927-fdae69bbe264", 00:19:06.895 "assigned_rate_limits": { 00:19:06.895 "rw_ios_per_sec": 0, 00:19:06.895 "rw_mbytes_per_sec": 0, 00:19:06.895 "r_mbytes_per_sec": 0, 00:19:06.895 "w_mbytes_per_sec": 0 00:19:06.895 }, 00:19:06.895 "claimed": true, 00:19:06.895 "claim_type": "exclusive_write", 00:19:06.895 "zoned": false, 00:19:06.895 "supported_io_types": { 00:19:06.895 "read": true, 00:19:06.895 "write": true, 00:19:06.895 "unmap": true, 00:19:06.895 "flush": true, 00:19:06.895 "reset": true, 00:19:06.895 "nvme_admin": false, 00:19:06.895 "nvme_io": false, 00:19:06.895 "nvme_io_md": false, 00:19:06.895 "write_zeroes": true, 00:19:06.895 "zcopy": true, 00:19:06.895 "get_zone_info": false, 00:19:06.896 "zone_management": false, 00:19:06.896 "zone_append": false, 00:19:06.896 "compare": false, 00:19:06.896 "compare_and_write": false, 00:19:06.896 "abort": true, 00:19:06.896 "seek_hole": false, 00:19:06.896 "seek_data": false, 00:19:06.896 "copy": true, 00:19:06.896 "nvme_iov_md": false 00:19:06.896 }, 00:19:06.896 "memory_domains": [ 00:19:06.896 { 00:19:06.896 "dma_device_id": "system", 00:19:06.896 "dma_device_type": 1 00:19:06.896 }, 00:19:06.896 { 00:19:06.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.896 "dma_device_type": 2 00:19:06.896 } 00:19:06.896 ], 00:19:06.896 "driver_specific": {} 00:19:06.896 } 00:19:06.896 ] 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.896 "name": "Existed_Raid", 00:19:06.896 "uuid": "bbcf42a5-c384-41ab-ba37-2a438409337f", 00:19:06.896 "strip_size_kb": 64, 00:19:06.896 "state": "configuring", 00:19:06.896 "raid_level": "raid5f", 00:19:06.896 "superblock": true, 00:19:06.896 "num_base_bdevs": 3, 00:19:06.896 "num_base_bdevs_discovered": 1, 00:19:06.896 "num_base_bdevs_operational": 3, 00:19:06.896 "base_bdevs_list": [ 00:19:06.896 { 00:19:06.896 
"name": "BaseBdev1", 00:19:06.896 "uuid": "89bf7f63-1a74-4b85-a927-fdae69bbe264", 00:19:06.896 "is_configured": true, 00:19:06.896 "data_offset": 2048, 00:19:06.896 "data_size": 63488 00:19:06.896 }, 00:19:06.896 { 00:19:06.896 "name": "BaseBdev2", 00:19:06.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.896 "is_configured": false, 00:19:06.896 "data_offset": 0, 00:19:06.896 "data_size": 0 00:19:06.896 }, 00:19:06.896 { 00:19:06.896 "name": "BaseBdev3", 00:19:06.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.896 "is_configured": false, 00:19:06.896 "data_offset": 0, 00:19:06.896 "data_size": 0 00:19:06.896 } 00:19:06.896 ] 00:19:06.896 }' 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.896 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.459 [2024-11-20 07:16:04.505956] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:07.459 [2024-11-20 07:16:04.506151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:07.459 [2024-11-20 07:16:04.514011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.459 [2024-11-20 07:16:04.516668] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:07.459 [2024-11-20 07:16:04.516880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:07.459 [2024-11-20 07:16:04.517001] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:07.459 [2024-11-20 07:16:04.517157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:07.459 "name": "Existed_Raid", 00:19:07.459 "uuid": "1a1abc57-e5d4-4847-9901-8798140204ef", 00:19:07.459 "strip_size_kb": 64, 00:19:07.459 "state": "configuring", 00:19:07.459 "raid_level": "raid5f", 00:19:07.459 "superblock": true, 00:19:07.459 "num_base_bdevs": 3, 00:19:07.459 "num_base_bdevs_discovered": 1, 00:19:07.459 "num_base_bdevs_operational": 3, 00:19:07.459 "base_bdevs_list": [ 00:19:07.459 { 00:19:07.459 "name": "BaseBdev1", 00:19:07.459 "uuid": "89bf7f63-1a74-4b85-a927-fdae69bbe264", 00:19:07.459 "is_configured": true, 00:19:07.459 "data_offset": 2048, 00:19:07.459 "data_size": 63488 00:19:07.459 }, 00:19:07.459 { 00:19:07.459 "name": "BaseBdev2", 00:19:07.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.459 "is_configured": false, 00:19:07.459 "data_offset": 0, 00:19:07.459 "data_size": 0 00:19:07.459 }, 00:19:07.459 { 00:19:07.459 "name": "BaseBdev3", 00:19:07.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.459 "is_configured": false, 00:19:07.459 "data_offset": 0, 00:19:07.459 "data_size": 
0 00:19:07.459 } 00:19:07.459 ] 00:19:07.459 }' 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:07.459 07:16:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.022 BaseBdev2 00:19:08.022 [2024-11-20 07:16:05.123431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.022 [ 00:19:08.022 { 00:19:08.022 "name": "BaseBdev2", 00:19:08.022 "aliases": [ 00:19:08.022 "281a9aad-6b0e-4fe5-afe3-4e8acad1ad34" 00:19:08.022 ], 00:19:08.022 "product_name": "Malloc disk", 00:19:08.022 "block_size": 512, 00:19:08.022 "num_blocks": 65536, 00:19:08.022 "uuid": "281a9aad-6b0e-4fe5-afe3-4e8acad1ad34", 00:19:08.022 "assigned_rate_limits": { 00:19:08.022 "rw_ios_per_sec": 0, 00:19:08.022 "rw_mbytes_per_sec": 0, 00:19:08.022 "r_mbytes_per_sec": 0, 00:19:08.022 "w_mbytes_per_sec": 0 00:19:08.022 }, 00:19:08.022 "claimed": true, 00:19:08.022 "claim_type": "exclusive_write", 00:19:08.022 "zoned": false, 00:19:08.022 "supported_io_types": { 00:19:08.022 "read": true, 00:19:08.022 "write": true, 00:19:08.022 "unmap": true, 00:19:08.022 "flush": true, 00:19:08.022 "reset": true, 00:19:08.022 "nvme_admin": false, 00:19:08.022 "nvme_io": false, 00:19:08.022 "nvme_io_md": false, 00:19:08.022 "write_zeroes": true, 00:19:08.022 "zcopy": true, 00:19:08.022 "get_zone_info": false, 00:19:08.022 "zone_management": false, 00:19:08.022 "zone_append": false, 00:19:08.022 "compare": false, 00:19:08.022 "compare_and_write": false, 00:19:08.022 "abort": true, 00:19:08.022 "seek_hole": false, 00:19:08.022 "seek_data": false, 00:19:08.022 "copy": true, 00:19:08.022 "nvme_iov_md": false 00:19:08.022 }, 00:19:08.022 "memory_domains": [ 00:19:08.022 { 00:19:08.022 "dma_device_id": "system", 00:19:08.022 "dma_device_type": 1 00:19:08.022 }, 00:19:08.022 { 00:19:08.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.022 "dma_device_type": 2 00:19:08.022 } 
00:19:08.022 ], 00:19:08.022 "driver_specific": {} 00:19:08.022 } 00:19:08.022 ] 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.022 "name": "Existed_Raid", 00:19:08.022 "uuid": "1a1abc57-e5d4-4847-9901-8798140204ef", 00:19:08.022 "strip_size_kb": 64, 00:19:08.022 "state": "configuring", 00:19:08.022 "raid_level": "raid5f", 00:19:08.022 "superblock": true, 00:19:08.022 "num_base_bdevs": 3, 00:19:08.022 "num_base_bdevs_discovered": 2, 00:19:08.022 "num_base_bdevs_operational": 3, 00:19:08.022 "base_bdevs_list": [ 00:19:08.022 { 00:19:08.022 "name": "BaseBdev1", 00:19:08.022 "uuid": "89bf7f63-1a74-4b85-a927-fdae69bbe264", 00:19:08.022 "is_configured": true, 00:19:08.022 "data_offset": 2048, 00:19:08.022 "data_size": 63488 00:19:08.022 }, 00:19:08.022 { 00:19:08.022 "name": "BaseBdev2", 00:19:08.022 "uuid": "281a9aad-6b0e-4fe5-afe3-4e8acad1ad34", 00:19:08.022 "is_configured": true, 00:19:08.022 "data_offset": 2048, 00:19:08.022 "data_size": 63488 00:19:08.022 }, 00:19:08.022 { 00:19:08.022 "name": "BaseBdev3", 00:19:08.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.022 "is_configured": false, 00:19:08.022 "data_offset": 0, 00:19:08.022 "data_size": 0 00:19:08.022 } 00:19:08.022 ] 00:19:08.022 }' 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.022 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.589 [2024-11-20 07:16:05.723005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:08.589 [2024-11-20 07:16:05.723542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:08.589 BaseBdev3 00:19:08.589 [2024-11-20 07:16:05.723698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:08.589 [2024-11-20 07:16:05.724246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.589 [2024-11-20 07:16:05.729529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:08.589 [2024-11-20 07:16:05.729680] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:08.589 [2024-11-20 07:16:05.730168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.589 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.589 [ 00:19:08.589 { 00:19:08.589 "name": "BaseBdev3", 00:19:08.589 "aliases": [ 00:19:08.589 "517b86bc-e8f0-4f96-9bd5-913e2cf05e0a" 00:19:08.589 ], 00:19:08.589 "product_name": "Malloc disk", 00:19:08.589 "block_size": 512, 00:19:08.589 "num_blocks": 65536, 00:19:08.589 "uuid": "517b86bc-e8f0-4f96-9bd5-913e2cf05e0a", 00:19:08.589 "assigned_rate_limits": { 00:19:08.589 "rw_ios_per_sec": 0, 00:19:08.589 "rw_mbytes_per_sec": 0, 00:19:08.589 "r_mbytes_per_sec": 0, 00:19:08.589 "w_mbytes_per_sec": 0 00:19:08.589 }, 00:19:08.589 "claimed": true, 00:19:08.589 "claim_type": "exclusive_write", 00:19:08.589 "zoned": false, 00:19:08.589 "supported_io_types": { 00:19:08.589 "read": true, 00:19:08.589 "write": true, 00:19:08.589 "unmap": true, 00:19:08.589 "flush": true, 00:19:08.589 "reset": true, 00:19:08.589 "nvme_admin": false, 00:19:08.589 "nvme_io": false, 00:19:08.589 "nvme_io_md": false, 00:19:08.589 "write_zeroes": true, 00:19:08.589 "zcopy": true, 00:19:08.589 "get_zone_info": false, 00:19:08.589 "zone_management": false, 00:19:08.589 "zone_append": false, 00:19:08.589 "compare": false, 00:19:08.589 "compare_and_write": false, 00:19:08.589 "abort": true, 00:19:08.589 "seek_hole": false, 00:19:08.589 "seek_data": false, 00:19:08.589 "copy": true, 00:19:08.589 
"nvme_iov_md": false 00:19:08.589 }, 00:19:08.589 "memory_domains": [ 00:19:08.589 { 00:19:08.589 "dma_device_id": "system", 00:19:08.589 "dma_device_type": 1 00:19:08.589 }, 00:19:08.589 { 00:19:08.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:08.590 "dma_device_type": 2 00:19:08.590 } 00:19:08.590 ], 00:19:08.590 "driver_specific": {} 00:19:08.590 } 00:19:08.590 ] 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.590 "name": "Existed_Raid", 00:19:08.590 "uuid": "1a1abc57-e5d4-4847-9901-8798140204ef", 00:19:08.590 "strip_size_kb": 64, 00:19:08.590 "state": "online", 00:19:08.590 "raid_level": "raid5f", 00:19:08.590 "superblock": true, 00:19:08.590 "num_base_bdevs": 3, 00:19:08.590 "num_base_bdevs_discovered": 3, 00:19:08.590 "num_base_bdevs_operational": 3, 00:19:08.590 "base_bdevs_list": [ 00:19:08.590 { 00:19:08.590 "name": "BaseBdev1", 00:19:08.590 "uuid": "89bf7f63-1a74-4b85-a927-fdae69bbe264", 00:19:08.590 "is_configured": true, 00:19:08.590 "data_offset": 2048, 00:19:08.590 "data_size": 63488 00:19:08.590 }, 00:19:08.590 { 00:19:08.590 "name": "BaseBdev2", 00:19:08.590 "uuid": "281a9aad-6b0e-4fe5-afe3-4e8acad1ad34", 00:19:08.590 "is_configured": true, 00:19:08.590 "data_offset": 2048, 00:19:08.590 "data_size": 63488 00:19:08.590 }, 00:19:08.590 { 00:19:08.590 "name": "BaseBdev3", 00:19:08.590 "uuid": "517b86bc-e8f0-4f96-9bd5-913e2cf05e0a", 00:19:08.590 "is_configured": true, 00:19:08.590 "data_offset": 2048, 00:19:08.590 "data_size": 63488 00:19:08.590 } 00:19:08.590 ] 00:19:08.590 }' 00:19:08.590 07:16:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.590 07:16:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.233 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:09.233 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:09.234 [2024-11-20 07:16:06.324393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:09.234 "name": "Existed_Raid", 00:19:09.234 "aliases": [ 00:19:09.234 "1a1abc57-e5d4-4847-9901-8798140204ef" 00:19:09.234 ], 00:19:09.234 "product_name": "Raid Volume", 00:19:09.234 "block_size": 512, 00:19:09.234 "num_blocks": 126976, 00:19:09.234 "uuid": "1a1abc57-e5d4-4847-9901-8798140204ef", 00:19:09.234 "assigned_rate_limits": { 00:19:09.234 "rw_ios_per_sec": 0, 00:19:09.234 
"rw_mbytes_per_sec": 0, 00:19:09.234 "r_mbytes_per_sec": 0, 00:19:09.234 "w_mbytes_per_sec": 0 00:19:09.234 }, 00:19:09.234 "claimed": false, 00:19:09.234 "zoned": false, 00:19:09.234 "supported_io_types": { 00:19:09.234 "read": true, 00:19:09.234 "write": true, 00:19:09.234 "unmap": false, 00:19:09.234 "flush": false, 00:19:09.234 "reset": true, 00:19:09.234 "nvme_admin": false, 00:19:09.234 "nvme_io": false, 00:19:09.234 "nvme_io_md": false, 00:19:09.234 "write_zeroes": true, 00:19:09.234 "zcopy": false, 00:19:09.234 "get_zone_info": false, 00:19:09.234 "zone_management": false, 00:19:09.234 "zone_append": false, 00:19:09.234 "compare": false, 00:19:09.234 "compare_and_write": false, 00:19:09.234 "abort": false, 00:19:09.234 "seek_hole": false, 00:19:09.234 "seek_data": false, 00:19:09.234 "copy": false, 00:19:09.234 "nvme_iov_md": false 00:19:09.234 }, 00:19:09.234 "driver_specific": { 00:19:09.234 "raid": { 00:19:09.234 "uuid": "1a1abc57-e5d4-4847-9901-8798140204ef", 00:19:09.234 "strip_size_kb": 64, 00:19:09.234 "state": "online", 00:19:09.234 "raid_level": "raid5f", 00:19:09.234 "superblock": true, 00:19:09.234 "num_base_bdevs": 3, 00:19:09.234 "num_base_bdevs_discovered": 3, 00:19:09.234 "num_base_bdevs_operational": 3, 00:19:09.234 "base_bdevs_list": [ 00:19:09.234 { 00:19:09.234 "name": "BaseBdev1", 00:19:09.234 "uuid": "89bf7f63-1a74-4b85-a927-fdae69bbe264", 00:19:09.234 "is_configured": true, 00:19:09.234 "data_offset": 2048, 00:19:09.234 "data_size": 63488 00:19:09.234 }, 00:19:09.234 { 00:19:09.234 "name": "BaseBdev2", 00:19:09.234 "uuid": "281a9aad-6b0e-4fe5-afe3-4e8acad1ad34", 00:19:09.234 "is_configured": true, 00:19:09.234 "data_offset": 2048, 00:19:09.234 "data_size": 63488 00:19:09.234 }, 00:19:09.234 { 00:19:09.234 "name": "BaseBdev3", 00:19:09.234 "uuid": "517b86bc-e8f0-4f96-9bd5-913e2cf05e0a", 00:19:09.234 "is_configured": true, 00:19:09.234 "data_offset": 2048, 00:19:09.234 "data_size": 63488 00:19:09.234 } 00:19:09.234 ] 00:19:09.234 } 
00:19:09.234 } 00:19:09.234 }' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:09.234 BaseBdev2 00:19:09.234 BaseBdev3' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
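The jq filters in the trace above pull the configured base bdev names and the block-size tuple out of the `Existed_Raid` dump. A short Python cross-check of the same selection and of the raid5f capacity arithmetic, using values copied from the dump (the dict is a trimmed stand-in for the parsed RPC reply, not a live call):

```python
# Trimmed sample of the Existed_Raid descriptor shown in the log above;
# field values are copied verbatim from the dump, not fetched via RPC.
raid = {
    "block_size": 512,
    "num_blocks": 126976,
    "driver_specific": {
        "raid": {
            "num_base_bdevs": 3,
            "base_bdevs_list": [
                {"name": "BaseBdev1", "is_configured": True,
                 "data_offset": 2048, "data_size": 63488},
                {"name": "BaseBdev2", "is_configured": True,
                 "data_offset": 2048, "data_size": 63488},
                {"name": "BaseBdev3", "is_configured": True,
                 "data_offset": 2048, "data_size": 63488},
            ],
        }
    },
}

# Equivalent of: jq -r '.driver_specific.raid.base_bdevs_list[]
#                       | select(.is_configured == true).name'
configured = [b["name"]
              for b in raid["driver_specific"]["raid"]["base_bdevs_list"]
              if b["is_configured"]]
assert configured == ["BaseBdev1", "BaseBdev2", "BaseBdev3"]

# raid5f capacity check: one stripe unit per stripe holds parity, so the
# usable size is (n - 1) * data_size of each base bdev.
n = raid["driver_specific"]["raid"]["num_base_bdevs"]
data_size = raid["driver_specific"]["raid"]["base_bdevs_list"][0]["data_size"]
assert (n - 1) * data_size == raid["num_blocks"] == 126976

# Each 65536-block malloc bdev reserves 2048 blocks for the superblock (-s),
# which matches data_offset=2048 and data_size=63488 in the dump.
assert 65536 - 2048 == data_size
```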
00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.234 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.493 [2024-11-20 07:16:06.656227] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.493 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.494 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.494 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.494 "name": "Existed_Raid", 00:19:09.494 "uuid": "1a1abc57-e5d4-4847-9901-8798140204ef", 00:19:09.494 "strip_size_kb": 64, 00:19:09.494 "state": "online", 00:19:09.494 "raid_level": "raid5f", 00:19:09.494 "superblock": true, 00:19:09.494 "num_base_bdevs": 3, 00:19:09.494 "num_base_bdevs_discovered": 2, 00:19:09.494 "num_base_bdevs_operational": 2, 00:19:09.494 "base_bdevs_list": [ 00:19:09.494 { 00:19:09.494 "name": null, 00:19:09.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.494 "is_configured": false, 00:19:09.494 "data_offset": 0, 00:19:09.494 "data_size": 63488 00:19:09.494 }, 00:19:09.494 { 00:19:09.494 "name": "BaseBdev2", 00:19:09.494 "uuid": "281a9aad-6b0e-4fe5-afe3-4e8acad1ad34", 00:19:09.494 "is_configured": true, 00:19:09.494 "data_offset": 2048, 00:19:09.494 "data_size": 63488 00:19:09.494 }, 00:19:09.494 { 00:19:09.494 "name": "BaseBdev3", 00:19:09.494 "uuid": "517b86bc-e8f0-4f96-9bd5-913e2cf05e0a", 00:19:09.494 "is_configured": true, 00:19:09.494 "data_offset": 2048, 00:19:09.494 "data_size": 63488 00:19:09.494 } 00:19:09.494 ] 00:19:09.494 }' 00:19:09.494 07:16:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.494 07:16:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.061 07:16:07 
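After `bdev_malloc_delete BaseBdev1`, `has_redundancy raid5f` returns 0 and the test expects the array to stay online with one cleared slot. The state dump above can be cross-checked with a minimal Python sketch (values copied from the log; the dict stands in for the parsed `bdev_raid_get_bdevs` reply):

```python
# Values copied from the Existed_Raid dump above, taken after BaseBdev1 was
# deleted: raid5f tolerates one missing member, so the array stays online.
info = {
    "state": "online",
    "raid_level": "raid5f",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
    "base_bdevs_list": [
        {"name": None, "is_configured": False, "data_offset": 0},
        {"name": "BaseBdev2", "is_configured": True, "data_offset": 2048},
        {"name": "BaseBdev3", "is_configured": True, "data_offset": 2048},
    ],
}

# Mirrors what verify_raid_bdev_state asserts: the expected state, and that
# the discovered count equals the slots still flagged is_configured.
assert info["state"] == "online"
assert info["num_base_bdevs_discovered"] == sum(
    b["is_configured"] for b in info["base_bdevs_list"])
# The removed member's slot is kept in the list, but cleared.
assert info["base_bdevs_list"][0]["name"] is None
assert info["base_bdevs_list"][0]["data_offset"] == 0
```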
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.061 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.320 [2024-11-20 07:16:07.383229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:10.320 [2024-11-20 07:16:07.383589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:10.320 [2024-11-20 07:16:07.470324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.320 [2024-11-20 07:16:07.542378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:10.320 [2024-11-20 07:16:07.542555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:10.320 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.579 BaseBdev2 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.579 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 [ 00:19:10.580 { 00:19:10.580 "name": "BaseBdev2", 00:19:10.580 "aliases": [ 00:19:10.580 "f966fb3d-e631-4989-837d-8a12bef950a2" 00:19:10.580 ], 00:19:10.580 "product_name": "Malloc disk", 00:19:10.580 "block_size": 512, 00:19:10.580 "num_blocks": 65536, 00:19:10.580 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:10.580 "assigned_rate_limits": { 00:19:10.580 "rw_ios_per_sec": 0, 00:19:10.580 "rw_mbytes_per_sec": 0, 00:19:10.580 "r_mbytes_per_sec": 0, 00:19:10.580 "w_mbytes_per_sec": 0 00:19:10.580 }, 00:19:10.580 "claimed": false, 00:19:10.580 "zoned": false, 00:19:10.580 "supported_io_types": { 00:19:10.580 "read": true, 00:19:10.580 "write": true, 00:19:10.580 "unmap": true, 00:19:10.580 "flush": true, 00:19:10.580 "reset": true, 00:19:10.580 "nvme_admin": false, 00:19:10.580 "nvme_io": false, 00:19:10.580 "nvme_io_md": false, 00:19:10.580 "write_zeroes": true, 00:19:10.580 "zcopy": true, 00:19:10.580 "get_zone_info": false, 00:19:10.580 "zone_management": false, 00:19:10.580 "zone_append": false, 
00:19:10.580 "compare": false, 00:19:10.580 "compare_and_write": false, 00:19:10.580 "abort": true, 00:19:10.580 "seek_hole": false, 00:19:10.580 "seek_data": false, 00:19:10.580 "copy": true, 00:19:10.580 "nvme_iov_md": false 00:19:10.580 }, 00:19:10.580 "memory_domains": [ 00:19:10.580 { 00:19:10.580 "dma_device_id": "system", 00:19:10.580 "dma_device_type": 1 00:19:10.580 }, 00:19:10.580 { 00:19:10.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.580 "dma_device_type": 2 00:19:10.580 } 00:19:10.580 ], 00:19:10.580 "driver_specific": {} 00:19:10.580 } 00:19:10.580 ] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 BaseBdev3 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:10.580 
07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 [ 00:19:10.580 { 00:19:10.580 "name": "BaseBdev3", 00:19:10.580 "aliases": [ 00:19:10.580 "4b56c93f-b713-4b53-8122-03c35a8bfb6a" 00:19:10.580 ], 00:19:10.580 "product_name": "Malloc disk", 00:19:10.580 "block_size": 512, 00:19:10.580 "num_blocks": 65536, 00:19:10.580 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:10.580 "assigned_rate_limits": { 00:19:10.580 "rw_ios_per_sec": 0, 00:19:10.580 "rw_mbytes_per_sec": 0, 00:19:10.580 "r_mbytes_per_sec": 0, 00:19:10.580 "w_mbytes_per_sec": 0 00:19:10.580 }, 00:19:10.580 "claimed": false, 00:19:10.580 "zoned": false, 00:19:10.580 "supported_io_types": { 00:19:10.580 "read": true, 00:19:10.580 "write": true, 00:19:10.580 "unmap": true, 00:19:10.580 "flush": true, 00:19:10.580 "reset": true, 00:19:10.580 "nvme_admin": false, 00:19:10.580 "nvme_io": false, 00:19:10.580 "nvme_io_md": false, 00:19:10.580 "write_zeroes": true, 00:19:10.580 "zcopy": true, 00:19:10.580 "get_zone_info": 
false, 00:19:10.580 "zone_management": false, 00:19:10.580 "zone_append": false, 00:19:10.580 "compare": false, 00:19:10.580 "compare_and_write": false, 00:19:10.580 "abort": true, 00:19:10.580 "seek_hole": false, 00:19:10.580 "seek_data": false, 00:19:10.580 "copy": true, 00:19:10.580 "nvme_iov_md": false 00:19:10.580 }, 00:19:10.580 "memory_domains": [ 00:19:10.580 { 00:19:10.580 "dma_device_id": "system", 00:19:10.580 "dma_device_type": 1 00:19:10.580 }, 00:19:10.580 { 00:19:10.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:10.580 "dma_device_type": 2 00:19:10.580 } 00:19:10.580 ], 00:19:10.580 "driver_specific": {} 00:19:10.580 } 00:19:10.580 ] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 [2024-11-20 07:16:07.842934] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:10.580 [2024-11-20 07:16:07.843159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:10.580 [2024-11-20 07:16:07.843205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:10.580 [2024-11-20 07:16:07.845750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.580 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.839 07:16:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:10.839 "name": "Existed_Raid", 00:19:10.839 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:10.839 "strip_size_kb": 64, 00:19:10.839 "state": "configuring", 00:19:10.839 "raid_level": "raid5f", 00:19:10.839 "superblock": true, 00:19:10.839 "num_base_bdevs": 3, 00:19:10.839 "num_base_bdevs_discovered": 2, 00:19:10.839 "num_base_bdevs_operational": 3, 00:19:10.839 "base_bdevs_list": [ 00:19:10.839 { 00:19:10.839 "name": "BaseBdev1", 00:19:10.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.839 "is_configured": false, 00:19:10.839 "data_offset": 0, 00:19:10.839 "data_size": 0 00:19:10.839 }, 00:19:10.839 { 00:19:10.839 "name": "BaseBdev2", 00:19:10.839 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:10.839 "is_configured": true, 00:19:10.839 "data_offset": 2048, 00:19:10.839 "data_size": 63488 00:19:10.839 }, 00:19:10.839 { 00:19:10.839 "name": "BaseBdev3", 00:19:10.839 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:10.839 "is_configured": true, 00:19:10.839 "data_offset": 2048, 00:19:10.839 "data_size": 63488 00:19:10.839 } 00:19:10.839 ] 00:19:10.839 }' 00:19:10.839 07:16:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:10.839 07:16:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.097 [2024-11-20 07:16:08.363194] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.097 
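The `configuring` dump above (discovered 2, operational 3, BaseBdev1 not yet created) follows a simple rule that the test relies on: a raid bdev only transitions to online once every operational member has been discovered. A hypothetical helper, named here for illustration only, makes the rule explicit:

```python
# Hypothetical helper (not part of bdev_raid.sh) mirroring the expectation
# encoded in verify_raid_bdev_state: an array stays "configuring" until all
# operational base bdevs have been discovered.
def expected_state(discovered: int, operational: int) -> str:
    return "online" if discovered == operational else "configuring"

# Counts copied from the Existed_Raid dumps in the trace above.
assert expected_state(discovered=2, operational=3) == "configuring"
assert expected_state(discovered=3, operational=3) == "online"
```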
07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.097 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.354 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.354 "name": "Existed_Raid", 00:19:11.354 "uuid": 
"c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:11.354 "strip_size_kb": 64, 00:19:11.354 "state": "configuring", 00:19:11.354 "raid_level": "raid5f", 00:19:11.354 "superblock": true, 00:19:11.354 "num_base_bdevs": 3, 00:19:11.354 "num_base_bdevs_discovered": 1, 00:19:11.354 "num_base_bdevs_operational": 3, 00:19:11.354 "base_bdevs_list": [ 00:19:11.354 { 00:19:11.354 "name": "BaseBdev1", 00:19:11.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.354 "is_configured": false, 00:19:11.354 "data_offset": 0, 00:19:11.354 "data_size": 0 00:19:11.354 }, 00:19:11.354 { 00:19:11.354 "name": null, 00:19:11.354 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:11.354 "is_configured": false, 00:19:11.354 "data_offset": 0, 00:19:11.354 "data_size": 63488 00:19:11.354 }, 00:19:11.354 { 00:19:11.354 "name": "BaseBdev3", 00:19:11.354 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:11.354 "is_configured": true, 00:19:11.354 "data_offset": 2048, 00:19:11.354 "data_size": 63488 00:19:11.354 } 00:19:11.354 ] 00:19:11.354 }' 00:19:11.354 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.354 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.611 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.611 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:11.611 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.611 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.611 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:19:11.869 07:16:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.869 [2024-11-20 07:16:08.994808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.869 BaseBdev1 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.869 07:16:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.869 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.869 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:11.869 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:11.869 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.869 [ 00:19:11.869 { 00:19:11.869 "name": "BaseBdev1", 00:19:11.869 "aliases": [ 00:19:11.869 "90eda4b9-44d7-4c0d-a842-2439f42645af" 00:19:11.869 ], 00:19:11.869 "product_name": "Malloc disk", 00:19:11.869 "block_size": 512, 00:19:11.869 "num_blocks": 65536, 00:19:11.869 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:11.869 "assigned_rate_limits": { 00:19:11.869 "rw_ios_per_sec": 0, 00:19:11.870 "rw_mbytes_per_sec": 0, 00:19:11.870 "r_mbytes_per_sec": 0, 00:19:11.870 "w_mbytes_per_sec": 0 00:19:11.870 }, 00:19:11.870 "claimed": true, 00:19:11.870 "claim_type": "exclusive_write", 00:19:11.870 "zoned": false, 00:19:11.870 "supported_io_types": { 00:19:11.870 "read": true, 00:19:11.870 "write": true, 00:19:11.870 "unmap": true, 00:19:11.870 "flush": true, 00:19:11.870 "reset": true, 00:19:11.870 "nvme_admin": false, 00:19:11.870 "nvme_io": false, 00:19:11.870 "nvme_io_md": false, 00:19:11.870 "write_zeroes": true, 00:19:11.870 "zcopy": true, 00:19:11.870 "get_zone_info": false, 00:19:11.870 "zone_management": false, 00:19:11.870 "zone_append": false, 00:19:11.870 "compare": false, 00:19:11.870 "compare_and_write": false, 00:19:11.870 "abort": true, 00:19:11.870 "seek_hole": false, 00:19:11.870 "seek_data": false, 00:19:11.870 "copy": true, 00:19:11.870 "nvme_iov_md": false 00:19:11.870 }, 00:19:11.870 "memory_domains": [ 00:19:11.870 { 00:19:11.870 "dma_device_id": "system", 00:19:11.870 "dma_device_type": 1 00:19:11.870 }, 00:19:11.870 { 00:19:11.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:11.870 "dma_device_type": 2 00:19:11.870 } 00:19:11.870 ], 00:19:11.870 "driver_specific": {} 00:19:11.870 } 00:19:11.870 ] 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.870 "name": "Existed_Raid", 00:19:11.870 "uuid": 
"c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:11.870 "strip_size_kb": 64, 00:19:11.870 "state": "configuring", 00:19:11.870 "raid_level": "raid5f", 00:19:11.870 "superblock": true, 00:19:11.870 "num_base_bdevs": 3, 00:19:11.870 "num_base_bdevs_discovered": 2, 00:19:11.870 "num_base_bdevs_operational": 3, 00:19:11.870 "base_bdevs_list": [ 00:19:11.870 { 00:19:11.870 "name": "BaseBdev1", 00:19:11.870 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:11.870 "is_configured": true, 00:19:11.870 "data_offset": 2048, 00:19:11.870 "data_size": 63488 00:19:11.870 }, 00:19:11.870 { 00:19:11.870 "name": null, 00:19:11.870 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:11.870 "is_configured": false, 00:19:11.870 "data_offset": 0, 00:19:11.870 "data_size": 63488 00:19:11.870 }, 00:19:11.870 { 00:19:11.870 "name": "BaseBdev3", 00:19:11.870 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:11.870 "is_configured": true, 00:19:11.870 "data_offset": 2048, 00:19:11.870 "data_size": 63488 00:19:11.870 } 00:19:11.870 ] 00:19:11.870 }' 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.870 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:19:12.480 07:16:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.480 [2024-11-20 07:16:09.615113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.480 "name": "Existed_Raid", 00:19:12.480 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:12.480 "strip_size_kb": 64, 00:19:12.480 "state": "configuring", 00:19:12.480 "raid_level": "raid5f", 00:19:12.480 "superblock": true, 00:19:12.480 "num_base_bdevs": 3, 00:19:12.480 "num_base_bdevs_discovered": 1, 00:19:12.480 "num_base_bdevs_operational": 3, 00:19:12.480 "base_bdevs_list": [ 00:19:12.480 { 00:19:12.480 "name": "BaseBdev1", 00:19:12.480 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:12.480 "is_configured": true, 00:19:12.480 "data_offset": 2048, 00:19:12.480 "data_size": 63488 00:19:12.480 }, 00:19:12.480 { 00:19:12.480 "name": null, 00:19:12.480 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:12.480 "is_configured": false, 00:19:12.480 "data_offset": 0, 00:19:12.480 "data_size": 63488 00:19:12.480 }, 00:19:12.480 { 00:19:12.480 "name": null, 00:19:12.480 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:12.480 "is_configured": false, 00:19:12.480 "data_offset": 0, 00:19:12.480 "data_size": 63488 00:19:12.480 } 00:19:12.480 ] 00:19:12.480 }' 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.480 07:16:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.054 [2024-11-20 07:16:10.207381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.054 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.054 "name": "Existed_Raid", 00:19:13.054 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:13.054 "strip_size_kb": 64, 00:19:13.054 "state": "configuring", 00:19:13.054 "raid_level": "raid5f", 00:19:13.054 "superblock": true, 00:19:13.054 "num_base_bdevs": 3, 00:19:13.054 "num_base_bdevs_discovered": 2, 00:19:13.054 "num_base_bdevs_operational": 3, 00:19:13.054 "base_bdevs_list": [ 00:19:13.054 { 00:19:13.054 "name": "BaseBdev1", 00:19:13.054 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:13.054 "is_configured": true, 00:19:13.054 "data_offset": 2048, 00:19:13.054 "data_size": 63488 00:19:13.054 }, 00:19:13.054 { 00:19:13.054 "name": null, 00:19:13.055 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:13.055 "is_configured": false, 00:19:13.055 "data_offset": 0, 00:19:13.055 "data_size": 63488 00:19:13.055 }, 00:19:13.055 { 00:19:13.055 "name": "BaseBdev3", 00:19:13.055 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 
00:19:13.055 "is_configured": true, 00:19:13.055 "data_offset": 2048, 00:19:13.055 "data_size": 63488 00:19:13.055 } 00:19:13.055 ] 00:19:13.055 }' 00:19:13.055 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.055 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 [2024-11-20 07:16:10.795636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.631 "name": "Existed_Raid", 00:19:13.631 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:13.631 "strip_size_kb": 64, 00:19:13.631 "state": "configuring", 00:19:13.631 "raid_level": "raid5f", 00:19:13.631 "superblock": true, 00:19:13.631 "num_base_bdevs": 3, 00:19:13.631 "num_base_bdevs_discovered": 1, 00:19:13.631 "num_base_bdevs_operational": 3, 00:19:13.631 "base_bdevs_list": [ 00:19:13.631 { 00:19:13.631 
"name": null, 00:19:13.631 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:13.631 "is_configured": false, 00:19:13.631 "data_offset": 0, 00:19:13.631 "data_size": 63488 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "name": null, 00:19:13.631 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:13.631 "is_configured": false, 00:19:13.631 "data_offset": 0, 00:19:13.631 "data_size": 63488 00:19:13.631 }, 00:19:13.631 { 00:19:13.631 "name": "BaseBdev3", 00:19:13.631 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:13.631 "is_configured": true, 00:19:13.631 "data_offset": 2048, 00:19:13.631 "data_size": 63488 00:19:13.631 } 00:19:13.631 ] 00:19:13.631 }' 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.631 07:16:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.199 [2024-11-20 
07:16:11.467614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.199 07:16:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.459 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.459 "name": "Existed_Raid", 00:19:14.459 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:14.459 "strip_size_kb": 64, 00:19:14.459 "state": "configuring", 00:19:14.459 "raid_level": "raid5f", 00:19:14.459 "superblock": true, 00:19:14.459 "num_base_bdevs": 3, 00:19:14.459 "num_base_bdevs_discovered": 2, 00:19:14.459 "num_base_bdevs_operational": 3, 00:19:14.459 "base_bdevs_list": [ 00:19:14.459 { 00:19:14.459 "name": null, 00:19:14.459 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:14.459 "is_configured": false, 00:19:14.459 "data_offset": 0, 00:19:14.459 "data_size": 63488 00:19:14.459 }, 00:19:14.459 { 00:19:14.459 "name": "BaseBdev2", 00:19:14.459 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:14.459 "is_configured": true, 00:19:14.459 "data_offset": 2048, 00:19:14.459 "data_size": 63488 00:19:14.459 }, 00:19:14.459 { 00:19:14.459 "name": "BaseBdev3", 00:19:14.459 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:14.459 "is_configured": true, 00:19:14.459 "data_offset": 2048, 00:19:14.459 "data_size": 63488 00:19:14.459 } 00:19:14.459 ] 00:19:14.459 }' 00:19:14.459 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.459 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.720 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.720 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.720 07:16:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.720 07:16:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:14.720 07:16:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.984 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 90eda4b9-44d7-4c0d-a842-2439f42645af 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.985 [2024-11-20 07:16:12.138393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:14.985 NewBaseBdev 00:19:14.985 [2024-11-20 07:16:12.138976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:14.985 [2024-11-20 07:16:12.139007] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:14.985 [2024-11-20 07:16:12.139356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:19:14.985 07:16:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.985 [2024-11-20 07:16:12.144429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:14.985 [2024-11-20 07:16:12.144617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:19:14.985 [2024-11-20 07:16:12.145002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.985 [ 00:19:14.985 { 00:19:14.985 "name": "NewBaseBdev", 00:19:14.985 "aliases": [ 00:19:14.985 "90eda4b9-44d7-4c0d-a842-2439f42645af" 00:19:14.985 ], 00:19:14.985 "product_name": "Malloc 
disk", 00:19:14.985 "block_size": 512, 00:19:14.985 "num_blocks": 65536, 00:19:14.985 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:14.985 "assigned_rate_limits": { 00:19:14.985 "rw_ios_per_sec": 0, 00:19:14.985 "rw_mbytes_per_sec": 0, 00:19:14.985 "r_mbytes_per_sec": 0, 00:19:14.985 "w_mbytes_per_sec": 0 00:19:14.985 }, 00:19:14.985 "claimed": true, 00:19:14.985 "claim_type": "exclusive_write", 00:19:14.985 "zoned": false, 00:19:14.985 "supported_io_types": { 00:19:14.985 "read": true, 00:19:14.985 "write": true, 00:19:14.985 "unmap": true, 00:19:14.985 "flush": true, 00:19:14.985 "reset": true, 00:19:14.985 "nvme_admin": false, 00:19:14.985 "nvme_io": false, 00:19:14.985 "nvme_io_md": false, 00:19:14.985 "write_zeroes": true, 00:19:14.985 "zcopy": true, 00:19:14.985 "get_zone_info": false, 00:19:14.985 "zone_management": false, 00:19:14.985 "zone_append": false, 00:19:14.985 "compare": false, 00:19:14.985 "compare_and_write": false, 00:19:14.985 "abort": true, 00:19:14.985 "seek_hole": false, 00:19:14.985 "seek_data": false, 00:19:14.985 "copy": true, 00:19:14.985 "nvme_iov_md": false 00:19:14.985 }, 00:19:14.985 "memory_domains": [ 00:19:14.985 { 00:19:14.985 "dma_device_id": "system", 00:19:14.985 "dma_device_type": 1 00:19:14.985 }, 00:19:14.985 { 00:19:14.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.985 "dma_device_type": 2 00:19:14.985 } 00:19:14.985 ], 00:19:14.985 "driver_specific": {} 00:19:14.985 } 00:19:14.985 ] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:14.985 07:16:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.985 "name": "Existed_Raid", 00:19:14.985 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:14.985 "strip_size_kb": 64, 00:19:14.985 "state": "online", 00:19:14.985 "raid_level": "raid5f", 00:19:14.985 "superblock": true, 00:19:14.985 "num_base_bdevs": 3, 00:19:14.985 "num_base_bdevs_discovered": 3, 00:19:14.985 "num_base_bdevs_operational": 3, 00:19:14.985 
"base_bdevs_list": [ 00:19:14.985 { 00:19:14.985 "name": "NewBaseBdev", 00:19:14.985 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:14.985 "is_configured": true, 00:19:14.985 "data_offset": 2048, 00:19:14.985 "data_size": 63488 00:19:14.985 }, 00:19:14.985 { 00:19:14.985 "name": "BaseBdev2", 00:19:14.985 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:14.985 "is_configured": true, 00:19:14.985 "data_offset": 2048, 00:19:14.985 "data_size": 63488 00:19:14.985 }, 00:19:14.985 { 00:19:14.985 "name": "BaseBdev3", 00:19:14.985 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:14.985 "is_configured": true, 00:19:14.985 "data_offset": 2048, 00:19:14.985 "data_size": 63488 00:19:14.985 } 00:19:14.985 ] 00:19:14.985 }' 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.985 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.570 [2024-11-20 07:16:12.735042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:15.570 "name": "Existed_Raid", 00:19:15.570 "aliases": [ 00:19:15.570 "c959680a-ec75-483c-8cf5-9792c07c9af3" 00:19:15.570 ], 00:19:15.570 "product_name": "Raid Volume", 00:19:15.570 "block_size": 512, 00:19:15.570 "num_blocks": 126976, 00:19:15.570 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:15.570 "assigned_rate_limits": { 00:19:15.570 "rw_ios_per_sec": 0, 00:19:15.570 "rw_mbytes_per_sec": 0, 00:19:15.570 "r_mbytes_per_sec": 0, 00:19:15.570 "w_mbytes_per_sec": 0 00:19:15.570 }, 00:19:15.570 "claimed": false, 00:19:15.570 "zoned": false, 00:19:15.570 "supported_io_types": { 00:19:15.570 "read": true, 00:19:15.570 "write": true, 00:19:15.570 "unmap": false, 00:19:15.570 "flush": false, 00:19:15.570 "reset": true, 00:19:15.570 "nvme_admin": false, 00:19:15.570 "nvme_io": false, 00:19:15.570 "nvme_io_md": false, 00:19:15.570 "write_zeroes": true, 00:19:15.570 "zcopy": false, 00:19:15.570 "get_zone_info": false, 00:19:15.570 "zone_management": false, 00:19:15.570 "zone_append": false, 00:19:15.570 "compare": false, 00:19:15.570 "compare_and_write": false, 00:19:15.570 "abort": false, 00:19:15.570 "seek_hole": false, 00:19:15.570 "seek_data": false, 00:19:15.570 "copy": false, 00:19:15.570 "nvme_iov_md": false 00:19:15.570 }, 00:19:15.570 "driver_specific": { 00:19:15.570 "raid": { 00:19:15.570 "uuid": "c959680a-ec75-483c-8cf5-9792c07c9af3", 00:19:15.570 "strip_size_kb": 64, 00:19:15.570 "state": "online", 00:19:15.570 "raid_level": "raid5f", 00:19:15.570 "superblock": true, 00:19:15.570 
"num_base_bdevs": 3, 00:19:15.570 "num_base_bdevs_discovered": 3, 00:19:15.570 "num_base_bdevs_operational": 3, 00:19:15.570 "base_bdevs_list": [ 00:19:15.570 { 00:19:15.570 "name": "NewBaseBdev", 00:19:15.570 "uuid": "90eda4b9-44d7-4c0d-a842-2439f42645af", 00:19:15.570 "is_configured": true, 00:19:15.570 "data_offset": 2048, 00:19:15.570 "data_size": 63488 00:19:15.570 }, 00:19:15.570 { 00:19:15.570 "name": "BaseBdev2", 00:19:15.570 "uuid": "f966fb3d-e631-4989-837d-8a12bef950a2", 00:19:15.570 "is_configured": true, 00:19:15.570 "data_offset": 2048, 00:19:15.570 "data_size": 63488 00:19:15.570 }, 00:19:15.570 { 00:19:15.570 "name": "BaseBdev3", 00:19:15.570 "uuid": "4b56c93f-b713-4b53-8122-03c35a8bfb6a", 00:19:15.570 "is_configured": true, 00:19:15.570 "data_offset": 2048, 00:19:15.570 "data_size": 63488 00:19:15.570 } 00:19:15.570 ] 00:19:15.570 } 00:19:15.570 } 00:19:15.570 }' 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:15.570 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:19:15.570 BaseBdev2 00:19:15.570 BaseBdev3' 00:19:15.571 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.571 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:15.571 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.571 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:19:15.571 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.571 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.571 
07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.842 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:15.843 07:16:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.843 [2024-11-20 07:16:13.058825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.843 [2024-11-20 07:16:13.059041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.843 [2024-11-20 07:16:13.059246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.843 [2024-11-20 07:16:13.059711] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.843 [2024-11-20 07:16:13.059746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80775 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80775 ']' 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80775 00:19:15.843 07:16:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80775 00:19:15.843 killing process with pid 80775 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80775' 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80775 00:19:15.843 [2024-11-20 07:16:13.095261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.843 07:16:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80775 00:19:16.116 [2024-11-20 07:16:13.356602] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.088 07:16:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:19:17.088 00:19:17.088 real 0m12.086s 00:19:17.088 user 0m20.126s 00:19:17.088 sys 0m1.655s 00:19:17.088 07:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.088 ************************************ 00:19:17.088 END TEST raid5f_state_function_test_sb 00:19:17.088 ************************************ 00:19:17.088 07:16:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 07:16:14 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:19:17.347 07:16:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:17.347 
07:16:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.347 07:16:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 ************************************ 00:19:17.347 START TEST raid5f_superblock_test 00:19:17.347 ************************************ 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:19:17.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81407 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81407 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81407 ']' 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.347 07:16:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.347 [2024-11-20 07:16:14.541482] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:19:17.347 [2024-11-20 07:16:14.541939] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81407 ] 00:19:17.607 [2024-11-20 07:16:14.727349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.607 [2024-11-20 07:16:14.861244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.865 [2024-11-20 07:16:15.064741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.865 [2024-11-20 07:16:15.065095] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.433 malloc1 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.433 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.433 [2024-11-20 07:16:15.591637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.433 [2024-11-20 07:16:15.591892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.433 [2024-11-20 07:16:15.591969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:18.433 [2024-11-20 07:16:15.592268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.434 [2024-11-20 07:16:15.595135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.434 [2024-11-20 07:16:15.595181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.434 pt1 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.434 malloc2 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.434 [2024-11-20 07:16:15.647115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:18.434 [2024-11-20 07:16:15.647336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.434 [2024-11-20 07:16:15.647376] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:18.434 [2024-11-20 07:16:15.647393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.434 [2024-11-20 07:16:15.650270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.434 [2024-11-20 07:16:15.650310] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:18.434 pt2 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.434 malloc3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.434 [2024-11-20 07:16:15.718712] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:18.434 [2024-11-20 07:16:15.718933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.434 [2024-11-20 07:16:15.718978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:18.434 [2024-11-20 07:16:15.718995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.434 [2024-11-20 07:16:15.722125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.434 [2024-11-20 07:16:15.722168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:18.434 pt3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.434 [2024-11-20 07:16:15.730878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:18.434 [2024-11-20 07:16:15.733731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:18.434 [2024-11-20 07:16:15.733974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:18.434 [2024-11-20 07:16:15.734260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:18.434 [2024-11-20 07:16:15.734391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:19:18.434 [2024-11-20 07:16:15.734764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.434 [2024-11-20 07:16:15.740181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:18.434 [2024-11-20 07:16:15.740332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:18.434 [2024-11-20 07:16:15.740610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.434 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.694 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.694 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.694 "name": "raid_bdev1", 00:19:18.694 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:18.694 "strip_size_kb": 64, 00:19:18.694 "state": "online", 00:19:18.694 "raid_level": "raid5f", 00:19:18.694 "superblock": true, 00:19:18.694 "num_base_bdevs": 3, 00:19:18.694 "num_base_bdevs_discovered": 3, 00:19:18.694 "num_base_bdevs_operational": 3, 00:19:18.694 "base_bdevs_list": [ 00:19:18.694 { 00:19:18.694 "name": "pt1", 00:19:18.694 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.694 "is_configured": true, 00:19:18.694 "data_offset": 2048, 00:19:18.694 "data_size": 63488 00:19:18.694 }, 00:19:18.694 { 00:19:18.694 "name": "pt2", 00:19:18.694 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.694 "is_configured": true, 00:19:18.694 "data_offset": 2048, 00:19:18.694 "data_size": 63488 00:19:18.694 }, 00:19:18.694 { 00:19:18.694 "name": "pt3", 00:19:18.694 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:18.694 "is_configured": true, 00:19:18.694 "data_offset": 2048, 00:19:18.694 "data_size": 63488 00:19:18.694 } 00:19:18.694 ] 00:19:18.694 }' 00:19:18.694 07:16:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.694 07:16:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:18.954 07:16:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:18.954 [2024-11-20 07:16:16.246816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.954 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:19.213 "name": "raid_bdev1", 00:19:19.213 "aliases": [ 00:19:19.213 "2ec93bd7-3ed7-45a8-84a9-5dc73651f192" 00:19:19.213 ], 00:19:19.213 "product_name": "Raid Volume", 00:19:19.213 "block_size": 512, 00:19:19.213 "num_blocks": 126976, 00:19:19.213 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:19.213 "assigned_rate_limits": { 00:19:19.213 "rw_ios_per_sec": 0, 00:19:19.213 "rw_mbytes_per_sec": 0, 00:19:19.213 "r_mbytes_per_sec": 0, 00:19:19.213 "w_mbytes_per_sec": 0 00:19:19.213 }, 00:19:19.213 "claimed": false, 00:19:19.213 "zoned": false, 00:19:19.213 "supported_io_types": { 00:19:19.213 "read": true, 00:19:19.213 "write": true, 00:19:19.213 "unmap": false, 00:19:19.213 "flush": false, 00:19:19.213 "reset": true, 00:19:19.213 "nvme_admin": false, 00:19:19.213 "nvme_io": false, 00:19:19.213 "nvme_io_md": false, 
00:19:19.213 "write_zeroes": true, 00:19:19.213 "zcopy": false, 00:19:19.213 "get_zone_info": false, 00:19:19.213 "zone_management": false, 00:19:19.213 "zone_append": false, 00:19:19.213 "compare": false, 00:19:19.213 "compare_and_write": false, 00:19:19.213 "abort": false, 00:19:19.213 "seek_hole": false, 00:19:19.213 "seek_data": false, 00:19:19.213 "copy": false, 00:19:19.213 "nvme_iov_md": false 00:19:19.213 }, 00:19:19.213 "driver_specific": { 00:19:19.213 "raid": { 00:19:19.213 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:19.213 "strip_size_kb": 64, 00:19:19.213 "state": "online", 00:19:19.213 "raid_level": "raid5f", 00:19:19.213 "superblock": true, 00:19:19.213 "num_base_bdevs": 3, 00:19:19.213 "num_base_bdevs_discovered": 3, 00:19:19.213 "num_base_bdevs_operational": 3, 00:19:19.213 "base_bdevs_list": [ 00:19:19.213 { 00:19:19.213 "name": "pt1", 00:19:19.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.213 "is_configured": true, 00:19:19.213 "data_offset": 2048, 00:19:19.213 "data_size": 63488 00:19:19.213 }, 00:19:19.213 { 00:19:19.213 "name": "pt2", 00:19:19.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.213 "is_configured": true, 00:19:19.213 "data_offset": 2048, 00:19:19.213 "data_size": 63488 00:19:19.213 }, 00:19:19.213 { 00:19:19.213 "name": "pt3", 00:19:19.213 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:19.213 "is_configured": true, 00:19:19.213 "data_offset": 2048, 00:19:19.213 "data_size": 63488 00:19:19.213 } 00:19:19.213 ] 00:19:19.213 } 00:19:19.213 } 00:19:19.213 }' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:19.213 pt2 00:19:19.213 pt3' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.213 
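The per-bdev comparison the trace runs above joins four fields with the jq filter `'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and checks each base bdev against the raid volume's value (`512` followed by three empty metadata fields, hence the logged `'512   '`). A minimal standalone sketch of that check — the sample records and the `fingerprint` helper are illustrative, mirroring the logged jq output rather than any SPDK API:

```python
# Sketch of the bdev_raid.sh@189-193 property comparison seen in the trace.
# Sample descriptors mirror the logged `bdev_get_bdevs` output: block_size 512,
# with md_size/md_interleave/dif_type unset (jq's join() renders null as "").
raid_bdev = {"name": "raid_bdev1", "block_size": 512}
base_bdevs = [{"name": f"pt{i}", "block_size": 512} for i in (1, 2, 3)]

def fingerprint(bdev):
    """Join the four compared fields, like the jq filter
    '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'."""
    fields = (bdev.get(k) for k in ("block_size", "md_size", "md_interleave", "dif_type"))
    return " ".join("" if v is None else str(v) for v in fields)

# cmp_raid_bdev is '512' plus three empty slots, i.e. '512' + three spaces
cmp_raid_bdev = fingerprint(raid_bdev)
for bdev in base_bdevs:
    # bdev_raid.sh@193: [[ $cmp_base_bdev == $cmp_raid_bdev ]]
    assert fingerprint(bdev) == cmp_raid_bdev, bdev["name"]
```

The trailing spaces matter: the shell test `[[ 512 == \5\1\2\ \ \ ]]` in the log is comparing against exactly that padded string.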
07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.213 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 [2024-11-20 07:16:16.590918] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2ec93bd7-3ed7-45a8-84a9-5dc73651f192 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2ec93bd7-3ed7-45a8-84a9-5dc73651f192 ']' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.473 07:16:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 [2024-11-20 07:16:16.638670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.473 [2024-11-20 07:16:16.638874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.473 [2024-11-20 07:16:16.639093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.473 [2024-11-20 07:16:16.639210] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:19.473 [2024-11-20 07:16:16.639228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.473 [2024-11-20 07:16:16.782792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:19.473 [2024-11-20 07:16:16.785459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:19.473 [2024-11-20 07:16:16.785525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:19.473 [2024-11-20 07:16:16.785627] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:19.473 [2024-11-20 07:16:16.785696] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:19.473 [2024-11-20 07:16:16.785728] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:19.473 [2024-11-20 07:16:16.785754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.473 [2024-11-20 07:16:16.785767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:19.473 request: 00:19:19.473 { 00:19:19.473 "name": "raid_bdev1", 00:19:19.473 "raid_level": "raid5f", 00:19:19.473 "base_bdevs": [ 00:19:19.473 "malloc1", 00:19:19.473 "malloc2", 00:19:19.473 "malloc3" 00:19:19.473 ], 00:19:19.473 "strip_size_kb": 64, 00:19:19.473 "superblock": false, 00:19:19.473 "method": "bdev_raid_create", 00:19:19.473 "req_id": 1 00:19:19.473 } 00:19:19.473 Got JSON-RPC error response 00:19:19.473 response: 00:19:19.473 { 00:19:19.473 "code": -17, 00:19:19.473 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:19.473 } 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:19.473 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.733 
07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.733 [2024-11-20 07:16:16.846718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:19.733 [2024-11-20 07:16:16.846940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.733 [2024-11-20 07:16:16.847026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:19.733 [2024-11-20 07:16:16.847235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.733 [2024-11-20 07:16:16.850199] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.733 [2024-11-20 07:16:16.850408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:19.733 [2024-11-20 07:16:16.850612] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:19.733 [2024-11-20 07:16:16.850795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:19.733 pt1 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.733 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.733 "name": "raid_bdev1", 00:19:19.733 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:19.733 "strip_size_kb": 64, 00:19:19.733 "state": "configuring", 00:19:19.733 "raid_level": "raid5f", 00:19:19.733 "superblock": true, 00:19:19.733 "num_base_bdevs": 3, 00:19:19.733 "num_base_bdevs_discovered": 1, 00:19:19.733 
"num_base_bdevs_operational": 3, 00:19:19.733 "base_bdevs_list": [ 00:19:19.733 { 00:19:19.733 "name": "pt1", 00:19:19.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:19.733 "is_configured": true, 00:19:19.733 "data_offset": 2048, 00:19:19.733 "data_size": 63488 00:19:19.733 }, 00:19:19.733 { 00:19:19.733 "name": null, 00:19:19.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.733 "is_configured": false, 00:19:19.733 "data_offset": 2048, 00:19:19.733 "data_size": 63488 00:19:19.733 }, 00:19:19.733 { 00:19:19.733 "name": null, 00:19:19.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:19.733 "is_configured": false, 00:19:19.733 "data_offset": 2048, 00:19:19.733 "data_size": 63488 00:19:19.733 } 00:19:19.733 ] 00:19:19.733 }' 00:19:19.734 07:16:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.734 07:16:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.302 [2024-11-20 07:16:17.387422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.302 [2024-11-20 07:16:17.387546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.302 [2024-11-20 07:16:17.387616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:20.302 [2024-11-20 07:16:17.387646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.302 [2024-11-20 07:16:17.388577] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.302 [2024-11-20 07:16:17.388664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.302 [2024-11-20 07:16:17.388850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.302 [2024-11-20 07:16:17.388925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.302 pt2 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.302 [2024-11-20 07:16:17.395330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.302 "name": "raid_bdev1", 00:19:20.302 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:20.302 "strip_size_kb": 64, 00:19:20.302 "state": "configuring", 00:19:20.302 "raid_level": "raid5f", 00:19:20.302 "superblock": true, 00:19:20.302 "num_base_bdevs": 3, 00:19:20.302 "num_base_bdevs_discovered": 1, 00:19:20.302 "num_base_bdevs_operational": 3, 00:19:20.302 "base_bdevs_list": [ 00:19:20.302 { 00:19:20.302 "name": "pt1", 00:19:20.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.302 "is_configured": true, 00:19:20.302 "data_offset": 2048, 00:19:20.302 "data_size": 63488 00:19:20.302 }, 00:19:20.302 { 00:19:20.302 "name": null, 00:19:20.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.302 "is_configured": false, 00:19:20.302 "data_offset": 0, 00:19:20.302 "data_size": 63488 00:19:20.302 }, 00:19:20.302 { 00:19:20.302 "name": null, 00:19:20.302 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:20.302 "is_configured": false, 00:19:20.302 "data_offset": 2048, 00:19:20.302 "data_size": 63488 00:19:20.302 } 00:19:20.302 ] 00:19:20.302 }' 00:19:20.302 07:16:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.302 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.870 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:20.870 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.870 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:20.870 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.870 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.870 [2024-11-20 07:16:17.911479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:20.870 [2024-11-20 07:16:17.911593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.870 [2024-11-20 07:16:17.911622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:20.870 [2024-11-20 07:16:17.911641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.871 [2024-11-20 07:16:17.912277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.871 [2024-11-20 07:16:17.912309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:20.871 [2024-11-20 07:16:17.912407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:20.871 [2024-11-20 07:16:17.912464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.871 pt2 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:20.871 07:16:17 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.871 [2024-11-20 07:16:17.919435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:20.871 [2024-11-20 07:16:17.919669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.871 [2024-11-20 07:16:17.919736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:20.871 [2024-11-20 07:16:17.919930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.871 [2024-11-20 07:16:17.920445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.871 [2024-11-20 07:16:17.920495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:20.871 [2024-11-20 07:16:17.920601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:20.871 [2024-11-20 07:16:17.920632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:20.871 [2024-11-20 07:16:17.920771] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:20.871 [2024-11-20 07:16:17.920808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:20.871 [2024-11-20 07:16:17.921134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:20.871 [2024-11-20 07:16:17.926343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:20.871 [2024-11-20 07:16:17.926510] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:20.871 [2024-11-20 07:16:17.926860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.871 pt3 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.871 "name": "raid_bdev1", 00:19:20.871 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:20.871 "strip_size_kb": 64, 00:19:20.871 "state": "online", 00:19:20.871 "raid_level": "raid5f", 00:19:20.871 "superblock": true, 00:19:20.871 "num_base_bdevs": 3, 00:19:20.871 "num_base_bdevs_discovered": 3, 00:19:20.871 "num_base_bdevs_operational": 3, 00:19:20.871 "base_bdevs_list": [ 00:19:20.871 { 00:19:20.871 "name": "pt1", 00:19:20.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:20.871 "is_configured": true, 00:19:20.871 "data_offset": 2048, 00:19:20.871 "data_size": 63488 00:19:20.871 }, 00:19:20.871 { 00:19:20.871 "name": "pt2", 00:19:20.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.871 "is_configured": true, 00:19:20.871 "data_offset": 2048, 00:19:20.871 "data_size": 63488 00:19:20.871 }, 00:19:20.871 { 00:19:20.871 "name": "pt3", 00:19:20.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:20.871 "is_configured": true, 00:19:20.871 "data_offset": 2048, 00:19:20.871 "data_size": 63488 00:19:20.871 } 00:19:20.871 ] 00:19:20.871 }' 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.871 07:16:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:21.439 [2024-11-20 07:16:18.469261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:21.439 "name": "raid_bdev1", 00:19:21.439 "aliases": [ 00:19:21.439 "2ec93bd7-3ed7-45a8-84a9-5dc73651f192" 00:19:21.439 ], 00:19:21.439 "product_name": "Raid Volume", 00:19:21.439 "block_size": 512, 00:19:21.439 "num_blocks": 126976, 00:19:21.439 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:21.439 "assigned_rate_limits": { 00:19:21.439 "rw_ios_per_sec": 0, 00:19:21.439 "rw_mbytes_per_sec": 0, 00:19:21.439 "r_mbytes_per_sec": 0, 00:19:21.439 "w_mbytes_per_sec": 0 00:19:21.439 }, 00:19:21.439 "claimed": false, 00:19:21.439 "zoned": false, 00:19:21.439 "supported_io_types": { 00:19:21.439 "read": true, 00:19:21.439 "write": true, 00:19:21.439 "unmap": false, 00:19:21.439 "flush": false, 00:19:21.439 "reset": true, 00:19:21.439 "nvme_admin": false, 00:19:21.439 "nvme_io": false, 00:19:21.439 "nvme_io_md": false, 00:19:21.439 "write_zeroes": true, 00:19:21.439 "zcopy": false, 00:19:21.439 
"get_zone_info": false, 00:19:21.439 "zone_management": false, 00:19:21.439 "zone_append": false, 00:19:21.439 "compare": false, 00:19:21.439 "compare_and_write": false, 00:19:21.439 "abort": false, 00:19:21.439 "seek_hole": false, 00:19:21.439 "seek_data": false, 00:19:21.439 "copy": false, 00:19:21.439 "nvme_iov_md": false 00:19:21.439 }, 00:19:21.439 "driver_specific": { 00:19:21.439 "raid": { 00:19:21.439 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:21.439 "strip_size_kb": 64, 00:19:21.439 "state": "online", 00:19:21.439 "raid_level": "raid5f", 00:19:21.439 "superblock": true, 00:19:21.439 "num_base_bdevs": 3, 00:19:21.439 "num_base_bdevs_discovered": 3, 00:19:21.439 "num_base_bdevs_operational": 3, 00:19:21.439 "base_bdevs_list": [ 00:19:21.439 { 00:19:21.439 "name": "pt1", 00:19:21.439 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.439 "is_configured": true, 00:19:21.439 "data_offset": 2048, 00:19:21.439 "data_size": 63488 00:19:21.439 }, 00:19:21.439 { 00:19:21.439 "name": "pt2", 00:19:21.439 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.439 "is_configured": true, 00:19:21.439 "data_offset": 2048, 00:19:21.439 "data_size": 63488 00:19:21.439 }, 00:19:21.439 { 00:19:21.439 "name": "pt3", 00:19:21.439 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.439 "is_configured": true, 00:19:21.439 "data_offset": 2048, 00:19:21.439 "data_size": 63488 00:19:21.439 } 00:19:21.439 ] 00:19:21.439 } 00:19:21.439 } 00:19:21.439 }' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:21.439 pt2 00:19:21.439 pt3' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.439 07:16:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:21.439 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 [2024-11-20 07:16:18.781342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2ec93bd7-3ed7-45a8-84a9-5dc73651f192 '!=' 2ec93bd7-3ed7-45a8-84a9-5dc73651f192 ']' 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 [2024-11-20 07:16:18.833169] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.699 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.700 "name": "raid_bdev1", 00:19:21.700 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:21.700 "strip_size_kb": 64, 00:19:21.700 "state": "online", 00:19:21.700 "raid_level": "raid5f", 00:19:21.700 "superblock": true, 00:19:21.700 "num_base_bdevs": 3, 00:19:21.700 "num_base_bdevs_discovered": 2, 00:19:21.700 "num_base_bdevs_operational": 2, 00:19:21.700 "base_bdevs_list": [ 00:19:21.700 { 00:19:21.700 "name": null, 00:19:21.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:21.700 "is_configured": false, 00:19:21.700 "data_offset": 0, 00:19:21.700 "data_size": 63488 00:19:21.700 }, 00:19:21.700 { 00:19:21.700 "name": "pt2", 00:19:21.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.700 "is_configured": true, 00:19:21.700 "data_offset": 2048, 00:19:21.700 "data_size": 63488 00:19:21.700 }, 00:19:21.700 { 00:19:21.700 "name": "pt3", 00:19:21.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.700 "is_configured": true, 00:19:21.700 "data_offset": 2048, 00:19:21.700 "data_size": 63488 00:19:21.700 } 00:19:21.700 ] 00:19:21.700 }' 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.700 07:16:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.268 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.268 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.268 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.268 [2024-11-20 07:16:19.373272] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.268 [2024-11-20 07:16:19.373495] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.268 [2024-11-20 07:16:19.373621] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.268 [2024-11-20 07:16:19.373718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.269 [2024-11-20 07:16:19.373742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.269 [2024-11-20 07:16:19.457220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:22.269 [2024-11-20 07:16:19.457414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.269 [2024-11-20 07:16:19.457499] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:22.269 [2024-11-20 07:16:19.457665] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:19:22.269 [2024-11-20 07:16:19.460554] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.269 [2024-11-20 07:16:19.460606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:22.269 [2024-11-20 07:16:19.460714] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:22.269 [2024-11-20 07:16:19.460775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:22.269 pt2 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.269 "name": "raid_bdev1", 00:19:22.269 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:22.269 "strip_size_kb": 64, 00:19:22.269 "state": "configuring", 00:19:22.269 "raid_level": "raid5f", 00:19:22.269 "superblock": true, 00:19:22.269 "num_base_bdevs": 3, 00:19:22.269 "num_base_bdevs_discovered": 1, 00:19:22.269 "num_base_bdevs_operational": 2, 00:19:22.269 "base_bdevs_list": [ 00:19:22.269 { 00:19:22.269 "name": null, 00:19:22.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.269 "is_configured": false, 00:19:22.269 "data_offset": 2048, 00:19:22.269 "data_size": 63488 00:19:22.269 }, 00:19:22.269 { 00:19:22.269 "name": "pt2", 00:19:22.269 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.269 "is_configured": true, 00:19:22.269 "data_offset": 2048, 00:19:22.269 "data_size": 63488 00:19:22.269 }, 00:19:22.269 { 00:19:22.269 "name": null, 00:19:22.269 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.269 "is_configured": false, 00:19:22.269 "data_offset": 2048, 00:19:22.269 "data_size": 63488 00:19:22.269 } 00:19:22.269 ] 00:19:22.269 }' 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.269 07:16:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.835 [2024-11-20 07:16:20.009461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:22.835 [2024-11-20 07:16:20.009545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:22.835 [2024-11-20 07:16:20.009582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:22.835 [2024-11-20 07:16:20.009601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:22.835 [2024-11-20 07:16:20.010215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:22.835 [2024-11-20 07:16:20.010256] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:22.835 [2024-11-20 07:16:20.010355] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:22.835 [2024-11-20 07:16:20.010409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:22.835 [2024-11-20 07:16:20.010552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:22.835 [2024-11-20 07:16:20.010573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:22.835 [2024-11-20 07:16:20.010903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:22.835 [2024-11-20 07:16:20.015834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:22.835 [2024-11-20 07:16:20.015864] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:19:22.835 pt3 00:19:22.835 [2024-11-20 07:16:20.016272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.835 07:16:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:22.835 "name": "raid_bdev1", 00:19:22.835 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:22.835 "strip_size_kb": 64, 00:19:22.835 "state": "online", 00:19:22.835 "raid_level": "raid5f", 00:19:22.835 "superblock": true, 00:19:22.835 "num_base_bdevs": 3, 00:19:22.835 "num_base_bdevs_discovered": 2, 00:19:22.835 "num_base_bdevs_operational": 2, 00:19:22.835 "base_bdevs_list": [ 00:19:22.835 { 00:19:22.835 "name": null, 00:19:22.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:22.835 "is_configured": false, 00:19:22.835 "data_offset": 2048, 00:19:22.835 "data_size": 63488 00:19:22.835 }, 00:19:22.835 { 00:19:22.835 "name": "pt2", 00:19:22.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.835 "is_configured": true, 00:19:22.835 "data_offset": 2048, 00:19:22.835 "data_size": 63488 00:19:22.835 }, 00:19:22.835 { 00:19:22.835 "name": "pt3", 00:19:22.835 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:22.835 "is_configured": true, 00:19:22.835 "data_offset": 2048, 00:19:22.835 "data_size": 63488 00:19:22.835 } 00:19:22.835 ] 00:19:22.835 }' 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:22.835 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.405 [2024-11-20 07:16:20.570230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.405 [2024-11-20 07:16:20.570396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:23.405 [2024-11-20 07:16:20.570635] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:23.405 [2024-11-20 07:16:20.570824] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:23.405 [2024-11-20 07:16:20.571040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.405 [2024-11-20 07:16:20.638286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:23.405 [2024-11-20 07:16:20.638490] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.405 [2024-11-20 07:16:20.638630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:23.405 [2024-11-20 07:16:20.638657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.405 [2024-11-20 07:16:20.641603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.405 pt1 00:19:23.405 [2024-11-20 07:16:20.641760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:23.405 [2024-11-20 07:16:20.641903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:23.405 [2024-11-20 07:16:20.641964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:23.405 [2024-11-20 07:16:20.642130] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:23.405 [2024-11-20 07:16:20.642147] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:23.405 [2024-11-20 07:16:20.642169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:23.405 [2024-11-20 07:16:20.642253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:19:23.405 07:16:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.405 "name": "raid_bdev1", 00:19:23.405 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:23.405 "strip_size_kb": 64, 00:19:23.405 "state": "configuring", 00:19:23.405 "raid_level": "raid5f", 00:19:23.405 
"superblock": true, 00:19:23.405 "num_base_bdevs": 3, 00:19:23.405 "num_base_bdevs_discovered": 1, 00:19:23.405 "num_base_bdevs_operational": 2, 00:19:23.405 "base_bdevs_list": [ 00:19:23.405 { 00:19:23.405 "name": null, 00:19:23.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.405 "is_configured": false, 00:19:23.405 "data_offset": 2048, 00:19:23.405 "data_size": 63488 00:19:23.405 }, 00:19:23.405 { 00:19:23.405 "name": "pt2", 00:19:23.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.405 "is_configured": true, 00:19:23.405 "data_offset": 2048, 00:19:23.405 "data_size": 63488 00:19:23.405 }, 00:19:23.405 { 00:19:23.405 "name": null, 00:19:23.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.405 "is_configured": false, 00:19:23.405 "data_offset": 2048, 00:19:23.405 "data_size": 63488 00:19:23.405 } 00:19:23.405 ] 00:19:23.405 }' 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.405 07:16:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.974 [2024-11-20 07:16:21.222615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:23.974 [2024-11-20 07:16:21.222710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.974 [2024-11-20 07:16:21.222744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:19:23.974 [2024-11-20 07:16:21.222759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.974 [2024-11-20 07:16:21.223416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.974 [2024-11-20 07:16:21.223449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:23.974 [2024-11-20 07:16:21.223554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:23.974 [2024-11-20 07:16:21.223585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:23.974 [2024-11-20 07:16:21.223734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:23.974 [2024-11-20 07:16:21.223756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:23.974 [2024-11-20 07:16:21.224081] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:23.974 pt3 00:19:23.974 [2024-11-20 07:16:21.229118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:23.974 [2024-11-20 07:16:21.229152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:23.974 [2024-11-20 07:16:21.229529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.974 "name": "raid_bdev1", 00:19:23.974 "uuid": "2ec93bd7-3ed7-45a8-84a9-5dc73651f192", 00:19:23.974 "strip_size_kb": 64, 00:19:23.974 "state": "online", 00:19:23.974 "raid_level": 
"raid5f", 00:19:23.974 "superblock": true, 00:19:23.974 "num_base_bdevs": 3, 00:19:23.974 "num_base_bdevs_discovered": 2, 00:19:23.974 "num_base_bdevs_operational": 2, 00:19:23.974 "base_bdevs_list": [ 00:19:23.974 { 00:19:23.974 "name": null, 00:19:23.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.974 "is_configured": false, 00:19:23.974 "data_offset": 2048, 00:19:23.974 "data_size": 63488 00:19:23.974 }, 00:19:23.974 { 00:19:23.974 "name": "pt2", 00:19:23.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:23.974 "is_configured": true, 00:19:23.974 "data_offset": 2048, 00:19:23.974 "data_size": 63488 00:19:23.974 }, 00:19:23.974 { 00:19:23.974 "name": "pt3", 00:19:23.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.974 "is_configured": true, 00:19:23.974 "data_offset": 2048, 00:19:23.974 "data_size": 63488 00:19:23.974 } 00:19:23.974 ] 00:19:23.974 }' 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.974 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.566 [2024-11-20 07:16:21.851952] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:24.566 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2ec93bd7-3ed7-45a8-84a9-5dc73651f192 '!=' 2ec93bd7-3ed7-45a8-84a9-5dc73651f192 ']' 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81407 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81407 ']' 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81407 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81407 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.826 killing process with pid 81407 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81407' 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81407 00:19:24.826 [2024-11-20 07:16:21.929576] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:24.826 07:16:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81407 
00:19:24.826 [2024-11-20 07:16:21.929690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.826 [2024-11-20 07:16:21.929772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.826 [2024-11-20 07:16:21.929809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:25.084 [2024-11-20 07:16:22.203406] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:26.020 07:16:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:19:26.020 00:19:26.020 real 0m8.800s 00:19:26.020 user 0m14.466s 00:19:26.020 sys 0m1.215s 00:19:26.020 07:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.020 07:16:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.020 ************************************ 00:19:26.020 END TEST raid5f_superblock_test 00:19:26.020 ************************************ 00:19:26.020 07:16:23 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:19:26.020 07:16:23 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:19:26.020 07:16:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:26.020 07:16:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.020 07:16:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:26.020 ************************************ 00:19:26.020 START TEST raid5f_rebuild_test 00:19:26.020 ************************************ 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:26.020 07:16:23 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81862 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81862 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81862 ']' 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.020 07:16:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.277 [2024-11-20 07:16:23.396940] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:19:26.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:26.277 Zero copy mechanism will not be used. 00:19:26.277 [2024-11-20 07:16:23.397340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81862 ] 00:19:26.277 [2024-11-20 07:16:23.583179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.535 [2024-11-20 07:16:23.713947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.793 [2024-11-20 07:16:23.918437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:26.793 [2024-11-20 07:16:23.918521] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 BaseBdev1_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.360 
07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 [2024-11-20 07:16:24.518008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:27.360 [2024-11-20 07:16:24.518442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.360 [2024-11-20 07:16:24.518509] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:27.360 [2024-11-20 07:16:24.518534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.360 [2024-11-20 07:16:24.522143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.360 [2024-11-20 07:16:24.522270] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:27.360 BaseBdev1 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 BaseBdev2_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 [2024-11-20 07:16:24.581409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:27.360 [2024-11-20 07:16:24.581882] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.360 [2024-11-20 07:16:24.581939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:27.360 [2024-11-20 07:16:24.581970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.360 [2024-11-20 07:16:24.585628] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.360 [2024-11-20 07:16:24.585769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:27.360 BaseBdev2 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 BaseBdev3_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.360 [2024-11-20 07:16:24.657547] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:27.360 [2024-11-20 07:16:24.657947] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.360 [2024-11-20 07:16:24.658139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:27.360 [2024-11-20 07:16:24.658176] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.360 [2024-11-20 07:16:24.661747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.360 [2024-11-20 07:16:24.661854] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:27.360 BaseBdev3 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.360 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.619 spare_malloc 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.619 spare_delay 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.619 [2024-11-20 07:16:24.724846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:27.619 [2024-11-20 07:16:24.725158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.619 [2024-11-20 07:16:24.725201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:27.619 [2024-11-20 07:16:24.725221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.619 [2024-11-20 07:16:24.728148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.619 [2024-11-20 07:16:24.728223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:27.619 spare 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.619 [2024-11-20 07:16:24.733001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.619 [2024-11-20 07:16:24.735554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:27.619 [2024-11-20 07:16:24.735648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:27.619 [2024-11-20 07:16:24.735780] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:27.619 [2024-11-20 07:16:24.735798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:19:27.619 [2024-11-20 
07:16:24.736353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:27.619 [2024-11-20 07:16:24.741789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:27.619 [2024-11-20 07:16:24.742005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:27.619 [2024-11-20 07:16:24.742466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:27.619 "name": "raid_bdev1", 00:19:27.619 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:27.619 "strip_size_kb": 64, 00:19:27.619 "state": "online", 00:19:27.619 "raid_level": "raid5f", 00:19:27.619 "superblock": false, 00:19:27.619 "num_base_bdevs": 3, 00:19:27.619 "num_base_bdevs_discovered": 3, 00:19:27.619 "num_base_bdevs_operational": 3, 00:19:27.619 "base_bdevs_list": [ 00:19:27.619 { 00:19:27.619 "name": "BaseBdev1", 00:19:27.619 "uuid": "466205a4-b513-5ffe-85dc-a1a96fbf8f10", 00:19:27.619 "is_configured": true, 00:19:27.619 "data_offset": 0, 00:19:27.619 "data_size": 65536 00:19:27.619 }, 00:19:27.619 { 00:19:27.619 "name": "BaseBdev2", 00:19:27.619 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:27.619 "is_configured": true, 00:19:27.619 "data_offset": 0, 00:19:27.619 "data_size": 65536 00:19:27.619 }, 00:19:27.619 { 00:19:27.619 "name": "BaseBdev3", 00:19:27.619 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:27.619 "is_configured": true, 00:19:27.619 "data_offset": 0, 00:19:27.619 "data_size": 65536 00:19:27.619 } 00:19:27.619 ] 00:19:27.619 }' 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:27.619 07:16:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:28.184 07:16:25 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.184 [2024-11-20 07:16:25.292744] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:28.184 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:28.482 [2024-11-20 07:16:25.652699] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:28.482 /dev/nbd0 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:28.482 1+0 records in 00:19:28.482 1+0 records out 00:19:28.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285958 s, 14.3 MB/s 00:19:28.482 
07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:28.482 07:16:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:19:29.051 512+0 records in 00:19:29.051 512+0 records out 00:19:29.051 67108864 bytes (67 MB, 64 MiB) copied, 0.497575 s, 135 MB/s 00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:19:29.051 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.309 [2024-11-20 07:16:26.504691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.309 [2024-11-20 07:16:26.526476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.309 07:16:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.309 "name": "raid_bdev1", 00:19:29.309 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:29.309 "strip_size_kb": 64, 00:19:29.309 "state": "online", 00:19:29.309 "raid_level": "raid5f", 00:19:29.309 "superblock": false, 00:19:29.309 "num_base_bdevs": 3, 00:19:29.309 "num_base_bdevs_discovered": 2, 00:19:29.309 "num_base_bdevs_operational": 2, 00:19:29.309 "base_bdevs_list": [ 00:19:29.309 { 00:19:29.309 "name": null, 00:19:29.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.309 "is_configured": false, 00:19:29.309 "data_offset": 0, 00:19:29.309 "data_size": 65536 00:19:29.309 }, 00:19:29.309 { 00:19:29.309 
"name": "BaseBdev2", 00:19:29.309 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:29.309 "is_configured": true, 00:19:29.309 "data_offset": 0, 00:19:29.309 "data_size": 65536 00:19:29.309 }, 00:19:29.309 { 00:19:29.309 "name": "BaseBdev3", 00:19:29.309 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:29.309 "is_configured": true, 00:19:29.309 "data_offset": 0, 00:19:29.309 "data_size": 65536 00:19:29.309 } 00:19:29.309 ] 00:19:29.309 }' 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.309 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.875 07:16:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:29.875 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.875 07:16:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.875 [2024-11-20 07:16:27.002579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:29.875 [2024-11-20 07:16:27.018199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:19:29.875 07:16:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.875 07:16:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:29.875 [2024-11-20 07:16:27.025573] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.810 "name": "raid_bdev1", 00:19:30.810 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:30.810 "strip_size_kb": 64, 00:19:30.810 "state": "online", 00:19:30.810 "raid_level": "raid5f", 00:19:30.810 "superblock": false, 00:19:30.810 "num_base_bdevs": 3, 00:19:30.810 "num_base_bdevs_discovered": 3, 00:19:30.810 "num_base_bdevs_operational": 3, 00:19:30.810 "process": { 00:19:30.810 "type": "rebuild", 00:19:30.810 "target": "spare", 00:19:30.810 "progress": { 00:19:30.810 "blocks": 18432, 00:19:30.810 "percent": 14 00:19:30.810 } 00:19:30.810 }, 00:19:30.810 "base_bdevs_list": [ 00:19:30.810 { 00:19:30.810 "name": "spare", 00:19:30.810 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:30.810 "is_configured": true, 00:19:30.810 "data_offset": 0, 00:19:30.810 "data_size": 65536 00:19:30.810 }, 00:19:30.810 { 00:19:30.810 "name": "BaseBdev2", 00:19:30.810 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:30.810 "is_configured": true, 00:19:30.810 "data_offset": 0, 00:19:30.810 "data_size": 65536 00:19:30.810 }, 00:19:30.810 { 00:19:30.810 "name": "BaseBdev3", 00:19:30.810 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:30.810 "is_configured": true, 00:19:30.810 "data_offset": 0, 00:19:30.810 
"data_size": 65536 00:19:30.810 } 00:19:30.810 ] 00:19:30.810 }' 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:30.810 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.069 [2024-11-20 07:16:28.175886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.069 [2024-11-20 07:16:28.241513] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.069 [2024-11-20 07:16:28.241635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.069 [2024-11-20 07:16:28.241665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.069 [2024-11-20 07:16:28.241678] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.069 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.069 "name": "raid_bdev1", 00:19:31.069 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:31.069 "strip_size_kb": 64, 00:19:31.069 "state": "online", 00:19:31.069 "raid_level": "raid5f", 00:19:31.069 "superblock": false, 00:19:31.069 "num_base_bdevs": 3, 00:19:31.069 "num_base_bdevs_discovered": 2, 00:19:31.069 "num_base_bdevs_operational": 2, 00:19:31.069 "base_bdevs_list": [ 00:19:31.069 { 00:19:31.069 "name": null, 00:19:31.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.069 "is_configured": false, 00:19:31.069 "data_offset": 0, 00:19:31.069 "data_size": 65536 00:19:31.069 }, 00:19:31.069 { 00:19:31.069 "name": "BaseBdev2", 00:19:31.069 
"uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:31.069 "is_configured": true, 00:19:31.069 "data_offset": 0, 00:19:31.069 "data_size": 65536 00:19:31.069 }, 00:19:31.069 { 00:19:31.069 "name": "BaseBdev3", 00:19:31.070 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:31.070 "is_configured": true, 00:19:31.070 "data_offset": 0, 00:19:31.070 "data_size": 65536 00:19:31.070 } 00:19:31.070 ] 00:19:31.070 }' 00:19:31.070 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.070 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.638 "name": "raid_bdev1", 00:19:31.638 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:31.638 "strip_size_kb": 64, 00:19:31.638 "state": "online", 00:19:31.638 "raid_level": 
"raid5f", 00:19:31.638 "superblock": false, 00:19:31.638 "num_base_bdevs": 3, 00:19:31.638 "num_base_bdevs_discovered": 2, 00:19:31.638 "num_base_bdevs_operational": 2, 00:19:31.638 "base_bdevs_list": [ 00:19:31.638 { 00:19:31.638 "name": null, 00:19:31.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.638 "is_configured": false, 00:19:31.638 "data_offset": 0, 00:19:31.638 "data_size": 65536 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "name": "BaseBdev2", 00:19:31.638 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:31.638 "is_configured": true, 00:19:31.638 "data_offset": 0, 00:19:31.638 "data_size": 65536 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "name": "BaseBdev3", 00:19:31.638 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:31.638 "is_configured": true, 00:19:31.638 "data_offset": 0, 00:19:31.638 "data_size": 65536 00:19:31.638 } 00:19:31.638 ] 00:19:31.638 }' 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.638 07:16:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.638 [2024-11-20 07:16:28.948926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.898 [2024-11-20 07:16:28.964014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:19:31.898 07:16:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.898 07:16:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:31.898 [2024-11-20 07:16:28.971533] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.836 07:16:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.836 "name": "raid_bdev1", 00:19:32.836 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:32.836 "strip_size_kb": 64, 00:19:32.836 "state": "online", 00:19:32.836 "raid_level": "raid5f", 00:19:32.836 "superblock": false, 00:19:32.836 "num_base_bdevs": 3, 00:19:32.836 "num_base_bdevs_discovered": 3, 00:19:32.836 "num_base_bdevs_operational": 3, 00:19:32.836 "process": { 00:19:32.836 "type": "rebuild", 00:19:32.836 "target": "spare", 00:19:32.836 "progress": { 00:19:32.836 "blocks": 18432, 00:19:32.836 
"percent": 14 00:19:32.836 } 00:19:32.836 }, 00:19:32.836 "base_bdevs_list": [ 00:19:32.836 { 00:19:32.836 "name": "spare", 00:19:32.836 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:32.836 "is_configured": true, 00:19:32.836 "data_offset": 0, 00:19:32.836 "data_size": 65536 00:19:32.836 }, 00:19:32.836 { 00:19:32.836 "name": "BaseBdev2", 00:19:32.836 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:32.836 "is_configured": true, 00:19:32.836 "data_offset": 0, 00:19:32.836 "data_size": 65536 00:19:32.836 }, 00:19:32.836 { 00:19:32.836 "name": "BaseBdev3", 00:19:32.836 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:32.836 "is_configured": true, 00:19:32.836 "data_offset": 0, 00:19:32.836 "data_size": 65536 00:19:32.836 } 00:19:32.836 ] 00:19:32.836 }' 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=595 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.836 07:16:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.096 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:33.096 "name": "raid_bdev1", 00:19:33.096 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:33.096 "strip_size_kb": 64, 00:19:33.096 "state": "online", 00:19:33.096 "raid_level": "raid5f", 00:19:33.096 "superblock": false, 00:19:33.096 "num_base_bdevs": 3, 00:19:33.096 "num_base_bdevs_discovered": 3, 00:19:33.096 "num_base_bdevs_operational": 3, 00:19:33.096 "process": { 00:19:33.096 "type": "rebuild", 00:19:33.096 "target": "spare", 00:19:33.096 "progress": { 00:19:33.096 "blocks": 22528, 00:19:33.096 "percent": 17 00:19:33.096 } 00:19:33.096 }, 00:19:33.096 "base_bdevs_list": [ 00:19:33.096 { 00:19:33.096 "name": "spare", 00:19:33.096 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:33.096 "is_configured": true, 00:19:33.096 "data_offset": 0, 00:19:33.096 "data_size": 65536 00:19:33.096 }, 00:19:33.096 { 00:19:33.096 "name": "BaseBdev2", 00:19:33.096 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:33.096 "is_configured": true, 00:19:33.096 "data_offset": 0, 00:19:33.096 
"data_size": 65536 00:19:33.096 }, 00:19:33.096 { 00:19:33.096 "name": "BaseBdev3", 00:19:33.096 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:33.096 "is_configured": true, 00:19:33.096 "data_offset": 0, 00:19:33.096 "data_size": 65536 00:19:33.096 } 00:19:33.096 ] 00:19:33.096 }' 00:19:33.096 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:33.096 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:33.096 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:33.096 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:33.096 07:16:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:34.033 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:34.033 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.033 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:34.034 "name": "raid_bdev1", 00:19:34.034 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:34.034 "strip_size_kb": 64, 00:19:34.034 "state": "online", 00:19:34.034 "raid_level": "raid5f", 00:19:34.034 "superblock": false, 00:19:34.034 "num_base_bdevs": 3, 00:19:34.034 "num_base_bdevs_discovered": 3, 00:19:34.034 "num_base_bdevs_operational": 3, 00:19:34.034 "process": { 00:19:34.034 "type": "rebuild", 00:19:34.034 "target": "spare", 00:19:34.034 "progress": { 00:19:34.034 "blocks": 45056, 00:19:34.034 "percent": 34 00:19:34.034 } 00:19:34.034 }, 00:19:34.034 "base_bdevs_list": [ 00:19:34.034 { 00:19:34.034 "name": "spare", 00:19:34.034 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:34.034 "is_configured": true, 00:19:34.034 "data_offset": 0, 00:19:34.034 "data_size": 65536 00:19:34.034 }, 00:19:34.034 { 00:19:34.034 "name": "BaseBdev2", 00:19:34.034 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:34.034 "is_configured": true, 00:19:34.034 "data_offset": 0, 00:19:34.034 "data_size": 65536 00:19:34.034 }, 00:19:34.034 { 00:19:34.034 "name": "BaseBdev3", 00:19:34.034 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:34.034 "is_configured": true, 00:19:34.034 "data_offset": 0, 00:19:34.034 "data_size": 65536 00:19:34.034 } 00:19:34.034 ] 00:19:34.034 }' 00:19:34.034 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.293 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.293 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.293 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.293 07:16:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.230 "name": "raid_bdev1", 00:19:35.230 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:35.230 "strip_size_kb": 64, 00:19:35.230 "state": "online", 00:19:35.230 "raid_level": "raid5f", 00:19:35.230 "superblock": false, 00:19:35.230 "num_base_bdevs": 3, 00:19:35.230 "num_base_bdevs_discovered": 3, 00:19:35.230 "num_base_bdevs_operational": 3, 00:19:35.230 "process": { 00:19:35.230 "type": "rebuild", 00:19:35.230 "target": "spare", 00:19:35.230 "progress": { 00:19:35.230 "blocks": 69632, 00:19:35.230 "percent": 53 00:19:35.230 } 00:19:35.230 }, 00:19:35.230 "base_bdevs_list": [ 00:19:35.230 { 00:19:35.230 "name": "spare", 00:19:35.230 "uuid": 
"987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:35.230 "is_configured": true, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 }, 00:19:35.230 { 00:19:35.230 "name": "BaseBdev2", 00:19:35.230 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:35.230 "is_configured": true, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 }, 00:19:35.230 { 00:19:35.230 "name": "BaseBdev3", 00:19:35.230 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:35.230 "is_configured": true, 00:19:35.230 "data_offset": 0, 00:19:35.230 "data_size": 65536 00:19:35.230 } 00:19:35.230 ] 00:19:35.230 }' 00:19:35.230 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.489 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.489 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.489 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.489 07:16:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.425 07:16:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.425 "name": "raid_bdev1", 00:19:36.425 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:36.425 "strip_size_kb": 64, 00:19:36.425 "state": "online", 00:19:36.425 "raid_level": "raid5f", 00:19:36.425 "superblock": false, 00:19:36.425 "num_base_bdevs": 3, 00:19:36.425 "num_base_bdevs_discovered": 3, 00:19:36.425 "num_base_bdevs_operational": 3, 00:19:36.425 "process": { 00:19:36.425 "type": "rebuild", 00:19:36.425 "target": "spare", 00:19:36.425 "progress": { 00:19:36.425 "blocks": 92160, 00:19:36.425 "percent": 70 00:19:36.425 } 00:19:36.425 }, 00:19:36.425 "base_bdevs_list": [ 00:19:36.425 { 00:19:36.425 "name": "spare", 00:19:36.425 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:36.425 "is_configured": true, 00:19:36.425 "data_offset": 0, 00:19:36.425 "data_size": 65536 00:19:36.425 }, 00:19:36.425 { 00:19:36.425 "name": "BaseBdev2", 00:19:36.425 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:36.425 "is_configured": true, 00:19:36.425 "data_offset": 0, 00:19:36.425 "data_size": 65536 00:19:36.425 }, 00:19:36.425 { 00:19:36.425 "name": "BaseBdev3", 00:19:36.425 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:36.425 "is_configured": true, 00:19:36.425 "data_offset": 0, 00:19:36.425 "data_size": 65536 00:19:36.425 } 00:19:36.425 ] 00:19:36.425 }' 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:36.425 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.684 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:36.684 07:16:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:37.619 "name": "raid_bdev1", 00:19:37.619 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:37.619 "strip_size_kb": 64, 00:19:37.619 "state": "online", 00:19:37.619 "raid_level": "raid5f", 00:19:37.619 "superblock": false, 00:19:37.619 "num_base_bdevs": 3, 00:19:37.619 "num_base_bdevs_discovered": 3, 00:19:37.619 
"num_base_bdevs_operational": 3, 00:19:37.619 "process": { 00:19:37.619 "type": "rebuild", 00:19:37.619 "target": "spare", 00:19:37.619 "progress": { 00:19:37.619 "blocks": 116736, 00:19:37.619 "percent": 89 00:19:37.619 } 00:19:37.619 }, 00:19:37.619 "base_bdevs_list": [ 00:19:37.619 { 00:19:37.619 "name": "spare", 00:19:37.619 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:37.619 "is_configured": true, 00:19:37.619 "data_offset": 0, 00:19:37.619 "data_size": 65536 00:19:37.619 }, 00:19:37.619 { 00:19:37.619 "name": "BaseBdev2", 00:19:37.619 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:37.619 "is_configured": true, 00:19:37.619 "data_offset": 0, 00:19:37.619 "data_size": 65536 00:19:37.619 }, 00:19:37.619 { 00:19:37.619 "name": "BaseBdev3", 00:19:37.619 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:37.619 "is_configured": true, 00:19:37.619 "data_offset": 0, 00:19:37.619 "data_size": 65536 00:19:37.619 } 00:19:37.619 ] 00:19:37.619 }' 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:37.619 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:37.878 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:37.878 07:16:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:38.137 [2024-11-20 07:16:35.450350] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:38.137 [2024-11-20 07:16:35.450470] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:38.137 [2024-11-20 07:16:35.450549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.704 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.704 "name": "raid_bdev1", 00:19:38.704 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:38.704 "strip_size_kb": 64, 00:19:38.704 "state": "online", 00:19:38.704 "raid_level": "raid5f", 00:19:38.704 "superblock": false, 00:19:38.704 "num_base_bdevs": 3, 00:19:38.704 "num_base_bdevs_discovered": 3, 00:19:38.704 "num_base_bdevs_operational": 3, 00:19:38.704 "base_bdevs_list": [ 00:19:38.704 { 00:19:38.704 "name": "spare", 00:19:38.704 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:38.704 "is_configured": true, 00:19:38.704 "data_offset": 0, 00:19:38.704 "data_size": 65536 00:19:38.704 }, 00:19:38.704 { 00:19:38.704 "name": "BaseBdev2", 00:19:38.704 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:38.704 "is_configured": true, 00:19:38.704 
"data_offset": 0, 00:19:38.704 "data_size": 65536 00:19:38.704 }, 00:19:38.704 { 00:19:38.704 "name": "BaseBdev3", 00:19:38.704 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:38.704 "is_configured": true, 00:19:38.704 "data_offset": 0, 00:19:38.704 "data_size": 65536 00:19:38.704 } 00:19:38.704 ] 00:19:38.704 }' 00:19:38.705 07:16:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.964 07:16:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.964 "name": "raid_bdev1", 00:19:38.964 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:38.964 "strip_size_kb": 64, 00:19:38.964 "state": "online", 00:19:38.964 "raid_level": "raid5f", 00:19:38.964 "superblock": false, 00:19:38.964 "num_base_bdevs": 3, 00:19:38.964 "num_base_bdevs_discovered": 3, 00:19:38.964 "num_base_bdevs_operational": 3, 00:19:38.964 "base_bdevs_list": [ 00:19:38.964 { 00:19:38.964 "name": "spare", 00:19:38.964 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:38.964 "is_configured": true, 00:19:38.964 "data_offset": 0, 00:19:38.964 "data_size": 65536 00:19:38.964 }, 00:19:38.964 { 00:19:38.964 "name": "BaseBdev2", 00:19:38.964 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:38.964 "is_configured": true, 00:19:38.964 "data_offset": 0, 00:19:38.964 "data_size": 65536 00:19:38.964 }, 00:19:38.964 { 00:19:38.964 "name": "BaseBdev3", 00:19:38.964 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:38.964 "is_configured": true, 00:19:38.964 "data_offset": 0, 00:19:38.964 "data_size": 65536 00:19:38.964 } 00:19:38.964 ] 00:19:38.964 }' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.964 07:16:36 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:38.964 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.223 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:39.223 "name": "raid_bdev1", 00:19:39.223 "uuid": "f1fed449-4ee5-4f0a-b99b-998a16e8558d", 00:19:39.223 "strip_size_kb": 64, 00:19:39.223 "state": "online", 00:19:39.223 "raid_level": "raid5f", 00:19:39.223 "superblock": false, 00:19:39.223 "num_base_bdevs": 3, 00:19:39.223 "num_base_bdevs_discovered": 3, 00:19:39.223 "num_base_bdevs_operational": 3, 00:19:39.223 "base_bdevs_list": [ 00:19:39.223 { 00:19:39.223 "name": "spare", 00:19:39.223 "uuid": "987f23d6-d67d-5d55-9ade-87adbfe709fe", 00:19:39.223 "is_configured": true, 00:19:39.223 "data_offset": 0, 00:19:39.223 "data_size": 65536 00:19:39.223 }, 00:19:39.223 { 00:19:39.223 
"name": "BaseBdev2", 00:19:39.223 "uuid": "200eb1e9-e56b-5d7f-965c-bb97537ae96d", 00:19:39.223 "is_configured": true, 00:19:39.223 "data_offset": 0, 00:19:39.223 "data_size": 65536 00:19:39.223 }, 00:19:39.223 { 00:19:39.223 "name": "BaseBdev3", 00:19:39.223 "uuid": "fbb8148c-a606-51f3-895a-50e0384b5405", 00:19:39.223 "is_configured": true, 00:19:39.223 "data_offset": 0, 00:19:39.223 "data_size": 65536 00:19:39.223 } 00:19:39.223 ] 00:19:39.223 }' 00:19:39.223 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:39.223 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.482 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:39.482 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.482 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.482 [2024-11-20 07:16:36.798101] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:39.482 [2024-11-20 07:16:36.798137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:39.482 [2024-11-20 07:16:36.798249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:39.482 [2024-11-20 07:16:36.798350] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:39.482 [2024-11-20 07:16:36.798374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.741 07:16:36 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:40.000 /dev/nbd0 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.000 1+0 records in 00:19:40.000 1+0 records out 00:19:40.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286086 s, 14.3 MB/s 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.000 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:40.001 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:40.001 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.001 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:40.001 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:40.265 /dev/nbd1 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:40.265 1+0 records in 00:19:40.265 1+0 records out 00:19:40.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320375 s, 12.8 MB/s 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:40.265 07:16:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:40.265 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.557 07:16:37 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:40.827 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:40.828 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:40.828 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81862 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81862 ']' 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81862 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81862 00:19:41.087 killing process with pid 81862 00:19:41.087 Received shutdown signal, test time was about 60.000000 seconds 00:19:41.087 00:19:41.087 Latency(us) 00:19:41.087 
[2024-11-20T07:16:38.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.087 [2024-11-20T07:16:38.407Z] =================================================================================================================== 00:19:41.087 [2024-11-20T07:16:38.407Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81862' 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81862 00:19:41.087 07:16:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81862 00:19:41.087 [2024-11-20 07:16:38.376013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.655 [2024-11-20 07:16:38.724982] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:42.592 00:19:42.592 real 0m16.432s 00:19:42.592 user 0m20.924s 00:19:42.592 sys 0m2.156s 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:42.592 ************************************ 00:19:42.592 END TEST raid5f_rebuild_test 00:19:42.592 ************************************ 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:42.592 07:16:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:19:42.592 07:16:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:42.592 07:16:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:42.592 07:16:39 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.592 ************************************ 00:19:42.592 START TEST raid5f_rebuild_test_sb 00:19:42.592 ************************************ 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:42.592 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82316 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82316 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82316 ']' 00:19:42.593 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:42.593 07:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:42.593 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:42.593 Zero copy mechanism will not be used. 00:19:42.593 [2024-11-20 07:16:39.869072] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:19:42.593 [2024-11-20 07:16:39.869253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82316 ] 00:19:42.851 [2024-11-20 07:16:40.043841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.110 [2024-11-20 07:16:40.173588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.110 [2024-11-20 07:16:40.375075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.110 [2024-11-20 07:16:40.375112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.677 
07:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.677 BaseBdev1_malloc 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.677 [2024-11-20 07:16:40.959391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:43.677 [2024-11-20 07:16:40.959501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.677 [2024-11-20 07:16:40.959539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:43.677 [2024-11-20 07:16:40.959556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.677 [2024-11-20 07:16:40.962368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.677 [2024-11-20 07:16:40.962434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:43.677 BaseBdev1 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:43.677 07:16:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.677 07:16:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 BaseBdev2_malloc 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 [2024-11-20 07:16:41.015491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:43.936 [2024-11-20 07:16:41.015583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.936 [2024-11-20 07:16:41.015609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:43.936 [2024-11-20 07:16:41.015644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.936 [2024-11-20 07:16:41.018379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.936 [2024-11-20 07:16:41.018442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:43.936 BaseBdev2 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:19:43.936 BaseBdev3_malloc 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 [2024-11-20 07:16:41.087631] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:43.936 [2024-11-20 07:16:41.087707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.936 [2024-11-20 07:16:41.087742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:43.936 [2024-11-20 07:16:41.087760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.936 [2024-11-20 07:16:41.090552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.936 [2024-11-20 07:16:41.090743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:43.936 BaseBdev3 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 spare_malloc 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 spare_delay 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 [2024-11-20 07:16:41.147950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:43.936 [2024-11-20 07:16:41.148021] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.936 [2024-11-20 07:16:41.148049] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:43.936 [2024-11-20 07:16:41.148067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.936 [2024-11-20 07:16:41.150854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.936 [2024-11-20 07:16:41.150923] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:43.936 spare 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.936 [2024-11-20 07:16:41.156031] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:43.936 [2024-11-20 07:16:41.158477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:43.936 [2024-11-20 07:16:41.158572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.936 [2024-11-20 07:16:41.158815] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:43.936 [2024-11-20 07:16:41.158837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:43.936 [2024-11-20 07:16:41.159184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:43.936 [2024-11-20 07:16:41.164527] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:43.936 [2024-11-20 07:16:41.164680] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:43.936 [2024-11-20 07:16:41.165101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.936 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.937 "name": "raid_bdev1", 00:19:43.937 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:43.937 "strip_size_kb": 64, 00:19:43.937 "state": "online", 00:19:43.937 "raid_level": "raid5f", 00:19:43.937 "superblock": true, 00:19:43.937 "num_base_bdevs": 3, 00:19:43.937 "num_base_bdevs_discovered": 3, 00:19:43.937 "num_base_bdevs_operational": 3, 00:19:43.937 "base_bdevs_list": [ 00:19:43.937 { 00:19:43.937 "name": "BaseBdev1", 00:19:43.937 "uuid": "99e2cb85-7b34-556d-ada5-b217adfc462c", 00:19:43.937 "is_configured": true, 00:19:43.937 "data_offset": 2048, 00:19:43.937 "data_size": 63488 00:19:43.937 }, 00:19:43.937 { 00:19:43.937 "name": "BaseBdev2", 00:19:43.937 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:43.937 "is_configured": true, 00:19:43.937 "data_offset": 2048, 00:19:43.937 "data_size": 63488 00:19:43.937 }, 00:19:43.937 { 00:19:43.937 "name": "BaseBdev3", 00:19:43.937 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:43.937 "is_configured": true, 
00:19:43.937 "data_offset": 2048, 00:19:43.937 "data_size": 63488 00:19:43.937 } 00:19:43.937 ] 00:19:43.937 }' 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.937 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.513 [2024-11-20 07:16:41.691321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:44.513 07:16:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.513 07:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:44.802 [2024-11-20 07:16:42.111221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:45.061 /dev/nbd0 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:45.061 1+0 records in 00:19:45.061 1+0 records out 00:19:45.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613179 s, 6.7 MB/s 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:19:45.061 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:19:45.628 496+0 records in 00:19:45.628 496+0 records out 00:19:45.628 65011712 bytes (65 MB, 62 MiB) copied, 0.473032 s, 137 MB/s 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:45.628 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:45.628 [2024-11-20 07:16:42.938511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.886 [2024-11-20 07:16:42.956328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.886 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.887 07:16:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:45.887 07:16:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.887 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:45.887 "name": "raid_bdev1", 00:19:45.887 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:45.887 "strip_size_kb": 64, 00:19:45.887 "state": "online", 00:19:45.887 "raid_level": "raid5f", 00:19:45.887 "superblock": true, 00:19:45.887 "num_base_bdevs": 3, 00:19:45.887 "num_base_bdevs_discovered": 2, 00:19:45.887 "num_base_bdevs_operational": 2, 00:19:45.887 "base_bdevs_list": [ 00:19:45.887 { 00:19:45.887 "name": null, 00:19:45.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.887 "is_configured": false, 00:19:45.887 "data_offset": 0, 00:19:45.887 "data_size": 63488 00:19:45.887 }, 00:19:45.887 { 00:19:45.887 "name": "BaseBdev2", 00:19:45.887 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:45.887 "is_configured": true, 00:19:45.887 "data_offset": 2048, 00:19:45.887 "data_size": 63488 00:19:45.887 }, 00:19:45.887 { 00:19:45.887 "name": "BaseBdev3", 00:19:45.887 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:45.887 "is_configured": true, 00:19:45.887 "data_offset": 2048, 00:19:45.887 "data_size": 63488 00:19:45.887 } 00:19:45.887 ] 00:19:45.887 }' 00:19:45.887 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:45.887 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.145 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:46.145 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.145 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:46.145 [2024-11-20 07:16:43.460468] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:46.405 [2024-11-20 07:16:43.476406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:19:46.405 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.405 07:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:46.405 [2024-11-20 07:16:43.484103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:47.342 "name": "raid_bdev1", 00:19:47.342 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:47.342 "strip_size_kb": 64, 00:19:47.342 "state": "online", 00:19:47.342 "raid_level": "raid5f", 00:19:47.342 
"superblock": true, 00:19:47.342 "num_base_bdevs": 3, 00:19:47.342 "num_base_bdevs_discovered": 3, 00:19:47.342 "num_base_bdevs_operational": 3, 00:19:47.342 "process": { 00:19:47.342 "type": "rebuild", 00:19:47.342 "target": "spare", 00:19:47.342 "progress": { 00:19:47.342 "blocks": 18432, 00:19:47.342 "percent": 14 00:19:47.342 } 00:19:47.342 }, 00:19:47.342 "base_bdevs_list": [ 00:19:47.342 { 00:19:47.342 "name": "spare", 00:19:47.342 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:47.342 "is_configured": true, 00:19:47.342 "data_offset": 2048, 00:19:47.342 "data_size": 63488 00:19:47.342 }, 00:19:47.342 { 00:19:47.342 "name": "BaseBdev2", 00:19:47.342 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:47.342 "is_configured": true, 00:19:47.342 "data_offset": 2048, 00:19:47.342 "data_size": 63488 00:19:47.342 }, 00:19:47.342 { 00:19:47.342 "name": "BaseBdev3", 00:19:47.342 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:47.342 "is_configured": true, 00:19:47.342 "data_offset": 2048, 00:19:47.342 "data_size": 63488 00:19:47.342 } 00:19:47.342 ] 00:19:47.342 }' 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.342 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.342 [2024-11-20 07:16:44.642219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:19:47.600 [2024-11-20 07:16:44.699758] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:47.600 [2024-11-20 07:16:44.699875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.600 [2024-11-20 07:16:44.699961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:47.600 [2024-11-20 07:16:44.699974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.600 
07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.600 "name": "raid_bdev1", 00:19:47.600 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:47.600 "strip_size_kb": 64, 00:19:47.600 "state": "online", 00:19:47.600 "raid_level": "raid5f", 00:19:47.600 "superblock": true, 00:19:47.600 "num_base_bdevs": 3, 00:19:47.600 "num_base_bdevs_discovered": 2, 00:19:47.600 "num_base_bdevs_operational": 2, 00:19:47.600 "base_bdevs_list": [ 00:19:47.600 { 00:19:47.600 "name": null, 00:19:47.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:47.600 "is_configured": false, 00:19:47.600 "data_offset": 0, 00:19:47.600 "data_size": 63488 00:19:47.600 }, 00:19:47.600 { 00:19:47.600 "name": "BaseBdev2", 00:19:47.600 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:47.600 "is_configured": true, 00:19:47.600 "data_offset": 2048, 00:19:47.600 "data_size": 63488 00:19:47.600 }, 00:19:47.600 { 00:19:47.600 "name": "BaseBdev3", 00:19:47.600 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:47.600 "is_configured": true, 00:19:47.600 "data_offset": 2048, 00:19:47.600 "data_size": 63488 00:19:47.600 } 00:19:47.600 ] 00:19:47.600 }' 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.600 07:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:48.241 07:16:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:48.241 "name": "raid_bdev1", 00:19:48.241 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:48.241 "strip_size_kb": 64, 00:19:48.241 "state": "online", 00:19:48.241 "raid_level": "raid5f", 00:19:48.241 "superblock": true, 00:19:48.241 "num_base_bdevs": 3, 00:19:48.241 "num_base_bdevs_discovered": 2, 00:19:48.241 "num_base_bdevs_operational": 2, 00:19:48.241 "base_bdevs_list": [ 00:19:48.241 { 00:19:48.241 "name": null, 00:19:48.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.241 "is_configured": false, 00:19:48.241 "data_offset": 0, 00:19:48.241 "data_size": 63488 00:19:48.241 }, 00:19:48.241 { 00:19:48.241 "name": "BaseBdev2", 00:19:48.241 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:48.241 "is_configured": true, 00:19:48.241 "data_offset": 2048, 00:19:48.241 "data_size": 63488 00:19:48.241 }, 00:19:48.241 { 00:19:48.241 "name": "BaseBdev3", 00:19:48.241 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:48.241 "is_configured": true, 00:19:48.241 "data_offset": 2048, 00:19:48.241 
"data_size": 63488 00:19:48.241 } 00:19:48.241 ] 00:19:48.241 }' 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:48.241 [2024-11-20 07:16:45.419166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.241 [2024-11-20 07:16:45.434369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.241 07:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:48.241 [2024-11-20 07:16:45.441973] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.178 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.438 "name": "raid_bdev1", 00:19:49.438 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:49.438 "strip_size_kb": 64, 00:19:49.438 "state": "online", 00:19:49.438 "raid_level": "raid5f", 00:19:49.438 "superblock": true, 00:19:49.438 "num_base_bdevs": 3, 00:19:49.438 "num_base_bdevs_discovered": 3, 00:19:49.438 "num_base_bdevs_operational": 3, 00:19:49.438 "process": { 00:19:49.438 "type": "rebuild", 00:19:49.438 "target": "spare", 00:19:49.438 "progress": { 00:19:49.438 "blocks": 18432, 00:19:49.438 "percent": 14 00:19:49.438 } 00:19:49.438 }, 00:19:49.438 "base_bdevs_list": [ 00:19:49.438 { 00:19:49.438 "name": "spare", 00:19:49.438 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:49.438 "is_configured": true, 00:19:49.438 "data_offset": 2048, 00:19:49.438 "data_size": 63488 00:19:49.438 }, 00:19:49.438 { 00:19:49.438 "name": "BaseBdev2", 00:19:49.438 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:49.438 "is_configured": true, 00:19:49.438 "data_offset": 2048, 00:19:49.438 "data_size": 63488 00:19:49.438 }, 00:19:49.438 { 00:19:49.438 "name": "BaseBdev3", 00:19:49.438 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:49.438 "is_configured": true, 00:19:49.438 "data_offset": 2048, 00:19:49.438 "data_size": 63488 00:19:49.438 } 00:19:49.438 ] 00:19:49.438 }' 
00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:49.438 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:49.438 "name": "raid_bdev1", 00:19:49.438 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:49.438 "strip_size_kb": 64, 00:19:49.438 "state": "online", 00:19:49.438 "raid_level": "raid5f", 00:19:49.438 "superblock": true, 00:19:49.438 "num_base_bdevs": 3, 00:19:49.438 "num_base_bdevs_discovered": 3, 00:19:49.438 "num_base_bdevs_operational": 3, 00:19:49.438 "process": { 00:19:49.438 "type": "rebuild", 00:19:49.438 "target": "spare", 00:19:49.438 "progress": { 00:19:49.438 "blocks": 22528, 00:19:49.438 "percent": 17 00:19:49.438 } 00:19:49.438 }, 00:19:49.438 "base_bdevs_list": [ 00:19:49.438 { 00:19:49.438 "name": "spare", 00:19:49.438 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:49.438 "is_configured": true, 00:19:49.438 "data_offset": 2048, 00:19:49.438 "data_size": 63488 00:19:49.438 }, 00:19:49.438 { 00:19:49.438 "name": "BaseBdev2", 00:19:49.438 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:49.438 "is_configured": true, 00:19:49.438 "data_offset": 2048, 00:19:49.438 "data_size": 63488 00:19:49.438 }, 00:19:49.438 { 00:19:49.438 "name": "BaseBdev3", 00:19:49.438 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:49.438 "is_configured": true, 00:19:49.438 "data_offset": 2048, 00:19:49.438 "data_size": 63488 00:19:49.438 } 00:19:49.438 ] 00:19:49.438 }' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:49.438 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:49.698 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:49.698 07:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.634 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:50.634 "name": "raid_bdev1", 00:19:50.634 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:50.634 "strip_size_kb": 64, 00:19:50.634 "state": "online", 00:19:50.634 "raid_level": "raid5f", 00:19:50.634 "superblock": true, 00:19:50.634 "num_base_bdevs": 3, 00:19:50.634 "num_base_bdevs_discovered": 3, 00:19:50.634 
"num_base_bdevs_operational": 3, 00:19:50.634 "process": { 00:19:50.634 "type": "rebuild", 00:19:50.634 "target": "spare", 00:19:50.634 "progress": { 00:19:50.634 "blocks": 47104, 00:19:50.634 "percent": 37 00:19:50.634 } 00:19:50.634 }, 00:19:50.634 "base_bdevs_list": [ 00:19:50.634 { 00:19:50.634 "name": "spare", 00:19:50.635 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:50.635 "is_configured": true, 00:19:50.635 "data_offset": 2048, 00:19:50.635 "data_size": 63488 00:19:50.635 }, 00:19:50.635 { 00:19:50.635 "name": "BaseBdev2", 00:19:50.635 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:50.635 "is_configured": true, 00:19:50.635 "data_offset": 2048, 00:19:50.635 "data_size": 63488 00:19:50.635 }, 00:19:50.635 { 00:19:50.635 "name": "BaseBdev3", 00:19:50.635 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:50.635 "is_configured": true, 00:19:50.635 "data_offset": 2048, 00:19:50.635 "data_size": 63488 00:19:50.635 } 00:19:50.635 ] 00:19:50.635 }' 00:19:50.635 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:50.635 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.635 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:50.894 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.894 07:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.832 07:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.832 07:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.832 "name": "raid_bdev1", 00:19:51.832 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:51.832 "strip_size_kb": 64, 00:19:51.832 "state": "online", 00:19:51.832 "raid_level": "raid5f", 00:19:51.832 "superblock": true, 00:19:51.832 "num_base_bdevs": 3, 00:19:51.832 "num_base_bdevs_discovered": 3, 00:19:51.832 "num_base_bdevs_operational": 3, 00:19:51.832 "process": { 00:19:51.832 "type": "rebuild", 00:19:51.832 "target": "spare", 00:19:51.832 "progress": { 00:19:51.832 "blocks": 69632, 00:19:51.832 "percent": 54 00:19:51.832 } 00:19:51.832 }, 00:19:51.832 "base_bdevs_list": [ 00:19:51.832 { 00:19:51.832 "name": "spare", 00:19:51.832 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:51.832 "is_configured": true, 00:19:51.832 "data_offset": 2048, 00:19:51.832 "data_size": 63488 00:19:51.832 }, 00:19:51.832 { 00:19:51.832 "name": "BaseBdev2", 00:19:51.832 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:51.832 "is_configured": true, 00:19:51.832 "data_offset": 2048, 00:19:51.832 "data_size": 63488 00:19:51.832 }, 00:19:51.832 { 00:19:51.832 "name": "BaseBdev3", 
00:19:51.832 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:51.832 "is_configured": true, 00:19:51.832 "data_offset": 2048, 00:19:51.832 "data_size": 63488 00:19:51.832 } 00:19:51.832 ] 00:19:51.832 }' 00:19:51.832 07:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.832 07:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.832 07:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.096 07:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.096 07:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.042 "name": "raid_bdev1", 00:19:53.042 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:53.042 "strip_size_kb": 64, 00:19:53.042 "state": "online", 00:19:53.042 "raid_level": "raid5f", 00:19:53.042 "superblock": true, 00:19:53.042 "num_base_bdevs": 3, 00:19:53.042 "num_base_bdevs_discovered": 3, 00:19:53.042 "num_base_bdevs_operational": 3, 00:19:53.042 "process": { 00:19:53.042 "type": "rebuild", 00:19:53.042 "target": "spare", 00:19:53.042 "progress": { 00:19:53.042 "blocks": 94208, 00:19:53.042 "percent": 74 00:19:53.042 } 00:19:53.042 }, 00:19:53.042 "base_bdevs_list": [ 00:19:53.042 { 00:19:53.042 "name": "spare", 00:19:53.042 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:53.042 "is_configured": true, 00:19:53.042 "data_offset": 2048, 00:19:53.042 "data_size": 63488 00:19:53.042 }, 00:19:53.042 { 00:19:53.042 "name": "BaseBdev2", 00:19:53.042 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:53.042 "is_configured": true, 00:19:53.042 "data_offset": 2048, 00:19:53.042 "data_size": 63488 00:19:53.042 }, 00:19:53.042 { 00:19:53.042 "name": "BaseBdev3", 00:19:53.042 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:53.042 "is_configured": true, 00:19:53.042 "data_offset": 2048, 00:19:53.042 "data_size": 63488 00:19:53.042 } 00:19:53.042 ] 00:19:53.042 }' 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.042 07:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.419 07:16:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.419 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.419 "name": "raid_bdev1", 00:19:54.419 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:54.419 "strip_size_kb": 64, 00:19:54.419 "state": "online", 00:19:54.419 "raid_level": "raid5f", 00:19:54.419 "superblock": true, 00:19:54.419 "num_base_bdevs": 3, 00:19:54.419 "num_base_bdevs_discovered": 3, 00:19:54.419 "num_base_bdevs_operational": 3, 00:19:54.419 "process": { 00:19:54.419 "type": "rebuild", 00:19:54.419 "target": "spare", 00:19:54.419 "progress": { 00:19:54.419 "blocks": 118784, 00:19:54.419 "percent": 93 00:19:54.419 } 00:19:54.419 }, 00:19:54.419 "base_bdevs_list": [ 00:19:54.419 { 00:19:54.419 "name": "spare", 00:19:54.419 "uuid": 
"ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:54.419 "is_configured": true, 00:19:54.419 "data_offset": 2048, 00:19:54.419 "data_size": 63488 00:19:54.419 }, 00:19:54.419 { 00:19:54.419 "name": "BaseBdev2", 00:19:54.419 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:54.419 "is_configured": true, 00:19:54.419 "data_offset": 2048, 00:19:54.419 "data_size": 63488 00:19:54.419 }, 00:19:54.419 { 00:19:54.419 "name": "BaseBdev3", 00:19:54.419 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:54.420 "is_configured": true, 00:19:54.420 "data_offset": 2048, 00:19:54.420 "data_size": 63488 00:19:54.420 } 00:19:54.420 ] 00:19:54.420 }' 00:19:54.420 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.420 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.420 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.420 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.420 07:16:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.420 [2024-11-20 07:16:51.721645] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:54.420 [2024-11-20 07:16:51.721790] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:54.420 [2024-11-20 07:16:51.721995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.446 "name": "raid_bdev1", 00:19:55.446 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:55.446 "strip_size_kb": 64, 00:19:55.446 "state": "online", 00:19:55.446 "raid_level": "raid5f", 00:19:55.446 "superblock": true, 00:19:55.446 "num_base_bdevs": 3, 00:19:55.446 "num_base_bdevs_discovered": 3, 00:19:55.446 "num_base_bdevs_operational": 3, 00:19:55.446 "base_bdevs_list": [ 00:19:55.446 { 00:19:55.446 "name": "spare", 00:19:55.446 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:55.446 "is_configured": true, 00:19:55.446 "data_offset": 2048, 00:19:55.446 "data_size": 63488 00:19:55.446 }, 00:19:55.446 { 00:19:55.446 "name": "BaseBdev2", 00:19:55.446 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:55.446 "is_configured": true, 00:19:55.446 "data_offset": 2048, 00:19:55.446 "data_size": 63488 00:19:55.446 }, 00:19:55.446 { 00:19:55.446 "name": "BaseBdev3", 00:19:55.446 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:55.446 "is_configured": true, 00:19:55.446 "data_offset": 2048, 00:19:55.446 "data_size": 63488 00:19:55.446 } 
00:19:55.446 ] 00:19:55.446 }' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.446 "name": "raid_bdev1", 00:19:55.446 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:55.446 "strip_size_kb": 64, 00:19:55.446 "state": "online", 00:19:55.446 "raid_level": 
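The polling structure visible in the trace (`(( SECONDS < timeout ))` at line 707, `sleep 1` at line 711, and `break` at line 709 once `.process.type // "none"` reports `none`) relies on bash's built-in `SECONDS` counter as a deadline. A minimal sketch of that pattern, with a simulated completion flag standing in for the real RPC check (hypothetical names, not the SPDK functions):

```shell
#!/usr/bin/env bash
# Sketch of the SECONDS-based timeout loop used by the rebuild test:
# poll a condition once per iteration, break as soon as it holds, and
# give up when the wall-clock deadline passes. SECONDS counts elapsed
# seconds since the shell (or this assignment) started.
SECONDS=0
timeout=5
rebuild_done=""
tick=0
while (( SECONDS < timeout )); do
  (( ++tick ))
  # Simulate the rebuild finishing on the second poll; the real script
  # instead inspects bdev_raid_get_bdevs output with jq here.
  if (( tick >= 2 )); then
    rebuild_done="yes"
  fi
  [[ -n $rebuild_done ]] && break
  sleep 1
done
echo "rebuild_done=${rebuild_done:-no}"
```

Using `SECONDS` avoids spawning `date` in the loop; the trade-off is one-second granularity, which is why the script pairs it with `sleep 1` between polls.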
"raid5f", 00:19:55.446 "superblock": true, 00:19:55.446 "num_base_bdevs": 3, 00:19:55.446 "num_base_bdevs_discovered": 3, 00:19:55.446 "num_base_bdevs_operational": 3, 00:19:55.446 "base_bdevs_list": [ 00:19:55.446 { 00:19:55.446 "name": "spare", 00:19:55.446 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:55.446 "is_configured": true, 00:19:55.446 "data_offset": 2048, 00:19:55.446 "data_size": 63488 00:19:55.446 }, 00:19:55.446 { 00:19:55.446 "name": "BaseBdev2", 00:19:55.446 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:55.446 "is_configured": true, 00:19:55.446 "data_offset": 2048, 00:19:55.446 "data_size": 63488 00:19:55.446 }, 00:19:55.446 { 00:19:55.446 "name": "BaseBdev3", 00:19:55.446 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:55.446 "is_configured": true, 00:19:55.446 "data_offset": 2048, 00:19:55.446 "data_size": 63488 00:19:55.446 } 00:19:55.446 ] 00:19:55.446 }' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:55.446 07:16:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.446 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.705 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.705 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:55.705 "name": "raid_bdev1", 00:19:55.705 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:55.705 "strip_size_kb": 64, 00:19:55.705 "state": "online", 00:19:55.705 "raid_level": "raid5f", 00:19:55.705 "superblock": true, 00:19:55.705 "num_base_bdevs": 3, 00:19:55.705 "num_base_bdevs_discovered": 3, 00:19:55.705 "num_base_bdevs_operational": 3, 00:19:55.705 "base_bdevs_list": [ 00:19:55.705 { 00:19:55.705 "name": "spare", 00:19:55.705 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:55.705 "is_configured": true, 00:19:55.705 "data_offset": 2048, 00:19:55.705 "data_size": 63488 00:19:55.705 }, 00:19:55.705 { 00:19:55.705 "name": "BaseBdev2", 00:19:55.705 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:55.705 "is_configured": true, 00:19:55.705 "data_offset": 2048, 00:19:55.705 
"data_size": 63488 00:19:55.705 }, 00:19:55.705 { 00:19:55.705 "name": "BaseBdev3", 00:19:55.705 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:55.705 "is_configured": true, 00:19:55.705 "data_offset": 2048, 00:19:55.705 "data_size": 63488 00:19:55.705 } 00:19:55.705 ] 00:19:55.705 }' 00:19:55.705 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:55.705 07:16:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.273 [2024-11-20 07:16:53.305848] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:56.273 [2024-11-20 07:16:53.305901] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.273 [2024-11-20 07:16:53.306031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.273 [2024-11-20 07:16:53.306142] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.273 [2024-11-20 07:16:53.306167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.273 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:56.532 /dev/nbd0 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.532 1+0 records in 00:19:56.532 1+0 records out 00:19:56.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377816 s, 10.8 MB/s 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.532 07:16:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:56.791 /dev/nbd1 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.791 1+0 records in 00:19:56.791 1+0 records out 00:19:56.791 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381718 s, 10.7 MB/s 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- 
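The `waitfornbd` helper traced above polls `/proc/partitions` with `grep -q -w` for up to 20 attempts before declaring the NBD device ready. A self-contained sketch of that readiness loop, polling a temporary file instead of `/proc` so it runs anywhere (the file and device name here are stand-ins, not the real kernel interface):

```shell
#!/usr/bin/env bash
# Sketch of a waitfornbd-style readiness check: retry a whole-word grep
# for the device name until it appears or the attempt budget is spent.
# A temp file substitutes for /proc/partitions to keep the sketch
# self-contained and deterministic.
partitions_file=$(mktemp)
printf '%s\n' "nbd0" > "$partitions_file"   # device is "already present"

found=no
for (( i = 1; i <= 20; i++ )); do
  # -w matches nbd0 as a whole word, so nbd0 does not match nbd01
  if grep -q -w nbd0 "$partitions_file"; then
    found=yes
    break
  fi
  sleep 0.1
done
rm -f "$partitions_file"
echo "found=$found"
```

The whole-word match matters: without `-w`, waiting for `nbd1` would spuriously succeed as soon as `nbd10` or a partition like `nbd1p1` appeared in the table.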
# '[' 4096 '!=' 0 ']' 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.791 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.050 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.309 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.568 [2024-11-20 07:16:54.848382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.568 [2024-11-20 07:16:54.848457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.568 [2024-11-20 07:16:54.848485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:57.568 [2024-11-20 07:16:54.848504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.568 [2024-11-20 07:16:54.851516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.568 [2024-11-20 07:16:54.851576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:57.568 [2024-11-20 07:16:54.851683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:57.568 [2024-11-20 07:16:54.851759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.568 [2024-11-20 07:16:54.851949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.568 [2024-11-20 07:16:54.852109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:57.568 spare 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.568 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.827 [2024-11-20 07:16:54.952264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:57.827 [2024-11-20 07:16:54.952319] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:19:57.827 [2024-11-20 07:16:54.952740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:19:57.827 [2024-11-20 07:16:54.957734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:57.827 [2024-11-20 07:16:54.957766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:57.827 [2024-11-20 07:16:54.958052] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.827 07:16:54 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:57.827 07:16:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.827 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:57.827 "name": "raid_bdev1", 00:19:57.827 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:57.827 "strip_size_kb": 64, 00:19:57.827 "state": "online", 00:19:57.827 "raid_level": "raid5f", 00:19:57.827 "superblock": true, 00:19:57.827 "num_base_bdevs": 3, 00:19:57.827 "num_base_bdevs_discovered": 3, 00:19:57.827 "num_base_bdevs_operational": 3, 00:19:57.827 "base_bdevs_list": [ 00:19:57.827 { 00:19:57.827 "name": "spare", 00:19:57.827 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:57.827 "is_configured": true, 00:19:57.827 "data_offset": 2048, 00:19:57.827 "data_size": 63488 00:19:57.827 }, 00:19:57.827 { 00:19:57.827 "name": "BaseBdev2", 00:19:57.827 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:57.827 "is_configured": true, 00:19:57.827 "data_offset": 2048, 00:19:57.827 "data_size": 63488 00:19:57.827 }, 00:19:57.827 { 00:19:57.827 "name": "BaseBdev3", 00:19:57.827 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:57.827 "is_configured": true, 00:19:57.827 "data_offset": 2048, 00:19:57.827 "data_size": 63488 00:19:57.827 } 00:19:57.827 ] 00:19:57.827 }' 00:19:57.827 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:57.827 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.394 "name": "raid_bdev1", 00:19:58.394 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:58.394 "strip_size_kb": 64, 00:19:58.394 "state": "online", 00:19:58.394 "raid_level": "raid5f", 00:19:58.394 "superblock": true, 00:19:58.394 "num_base_bdevs": 3, 00:19:58.394 "num_base_bdevs_discovered": 3, 00:19:58.394 "num_base_bdevs_operational": 3, 00:19:58.394 "base_bdevs_list": [ 00:19:58.394 { 00:19:58.394 "name": "spare", 00:19:58.394 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:19:58.394 "is_configured": true, 00:19:58.394 "data_offset": 2048, 00:19:58.394 "data_size": 63488 00:19:58.394 }, 00:19:58.394 { 00:19:58.394 "name": "BaseBdev2", 00:19:58.394 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:58.394 "is_configured": true, 00:19:58.394 "data_offset": 2048, 00:19:58.394 "data_size": 63488 00:19:58.394 }, 00:19:58.394 { 00:19:58.394 "name": "BaseBdev3", 00:19:58.394 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 
00:19:58.394 "is_configured": true, 00:19:58.394 "data_offset": 2048, 00:19:58.394 "data_size": 63488 00:19:58.394 } 00:19:58.394 ] 00:19:58.394 }' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 [2024-11-20 07:16:55.656022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.394 "name": "raid_bdev1", 00:19:58.394 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:19:58.394 "strip_size_kb": 64, 00:19:58.394 "state": "online", 00:19:58.394 "raid_level": "raid5f", 00:19:58.394 "superblock": true, 00:19:58.394 "num_base_bdevs": 3, 00:19:58.394 "num_base_bdevs_discovered": 2, 00:19:58.394 "num_base_bdevs_operational": 2, 00:19:58.394 "base_bdevs_list": [ 00:19:58.394 { 
00:19:58.394 "name": null, 00:19:58.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.394 "is_configured": false, 00:19:58.394 "data_offset": 0, 00:19:58.394 "data_size": 63488 00:19:58.394 }, 00:19:58.394 { 00:19:58.394 "name": "BaseBdev2", 00:19:58.394 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:19:58.394 "is_configured": true, 00:19:58.394 "data_offset": 2048, 00:19:58.394 "data_size": 63488 00:19:58.394 }, 00:19:58.394 { 00:19:58.394 "name": "BaseBdev3", 00:19:58.394 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:19:58.394 "is_configured": true, 00:19:58.394 "data_offset": 2048, 00:19:58.394 "data_size": 63488 00:19:58.394 } 00:19:58.394 ] 00:19:58.394 }' 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.394 07:16:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.980 07:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:58.980 07:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.980 07:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.980 [2024-11-20 07:16:56.168168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.980 [2024-11-20 07:16:56.168405] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:58.980 [2024-11-20 07:16:56.168444] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:58.980 [2024-11-20 07:16:56.168489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:58.980 [2024-11-20 07:16:56.183202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:19:58.980 07:16:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.980 07:16:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:58.980 [2024-11-20 07:16:56.190340] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.914 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.172 "name": "raid_bdev1", 00:20:00.172 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:00.172 "strip_size_kb": 64, 00:20:00.172 "state": "online", 00:20:00.172 
"raid_level": "raid5f", 00:20:00.172 "superblock": true, 00:20:00.172 "num_base_bdevs": 3, 00:20:00.172 "num_base_bdevs_discovered": 3, 00:20:00.172 "num_base_bdevs_operational": 3, 00:20:00.172 "process": { 00:20:00.172 "type": "rebuild", 00:20:00.172 "target": "spare", 00:20:00.172 "progress": { 00:20:00.172 "blocks": 18432, 00:20:00.172 "percent": 14 00:20:00.172 } 00:20:00.172 }, 00:20:00.172 "base_bdevs_list": [ 00:20:00.172 { 00:20:00.172 "name": "spare", 00:20:00.172 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:20:00.172 "is_configured": true, 00:20:00.172 "data_offset": 2048, 00:20:00.172 "data_size": 63488 00:20:00.172 }, 00:20:00.172 { 00:20:00.172 "name": "BaseBdev2", 00:20:00.172 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:00.172 "is_configured": true, 00:20:00.172 "data_offset": 2048, 00:20:00.172 "data_size": 63488 00:20:00.172 }, 00:20:00.172 { 00:20:00.172 "name": "BaseBdev3", 00:20:00.172 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:00.172 "is_configured": true, 00:20:00.172 "data_offset": 2048, 00:20:00.172 "data_size": 63488 00:20:00.172 } 00:20:00.172 ] 00:20:00.172 }' 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.172 [2024-11-20 07:16:57.357490] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.172 [2024-11-20 07:16:57.405695] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:00.172 [2024-11-20 07:16:57.405782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.172 [2024-11-20 07:16:57.405808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.172 [2024-11-20 07:16:57.405822] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.172 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.430 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.430 "name": "raid_bdev1", 00:20:00.430 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:00.430 "strip_size_kb": 64, 00:20:00.430 "state": "online", 00:20:00.430 "raid_level": "raid5f", 00:20:00.430 "superblock": true, 00:20:00.430 "num_base_bdevs": 3, 00:20:00.430 "num_base_bdevs_discovered": 2, 00:20:00.430 "num_base_bdevs_operational": 2, 00:20:00.430 "base_bdevs_list": [ 00:20:00.430 { 00:20:00.430 "name": null, 00:20:00.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.430 "is_configured": false, 00:20:00.430 "data_offset": 0, 00:20:00.430 "data_size": 63488 00:20:00.430 }, 00:20:00.430 { 00:20:00.430 "name": "BaseBdev2", 00:20:00.430 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:00.430 "is_configured": true, 00:20:00.430 "data_offset": 2048, 00:20:00.430 "data_size": 63488 00:20:00.430 }, 00:20:00.430 { 00:20:00.430 "name": "BaseBdev3", 00:20:00.430 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:00.430 "is_configured": true, 00:20:00.430 "data_offset": 2048, 00:20:00.430 "data_size": 63488 00:20:00.430 } 00:20:00.430 ] 00:20:00.430 }' 00:20:00.430 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.430 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.689 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:00.689 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.689 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.689 [2024-11-20 07:16:57.960870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:00.689 [2024-11-20 07:16:57.960999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:00.689 [2024-11-20 07:16:57.961030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:20:00.689 [2024-11-20 07:16:57.961051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:00.689 [2024-11-20 07:16:57.961651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:00.689 [2024-11-20 07:16:57.961696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:00.689 [2024-11-20 07:16:57.961816] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:00.689 [2024-11-20 07:16:57.961841] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:00.689 [2024-11-20 07:16:57.961855] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:00.689 [2024-11-20 07:16:57.961909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:00.689 [2024-11-20 07:16:57.976321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:20:00.689 spare 00:20:00.689 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.689 07:16:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:00.689 [2024-11-20 07:16:57.983477] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.061 07:16:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.061 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.061 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.061 "name": "raid_bdev1", 00:20:02.061 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:02.061 "strip_size_kb": 64, 00:20:02.061 "state": 
"online", 00:20:02.061 "raid_level": "raid5f", 00:20:02.061 "superblock": true, 00:20:02.061 "num_base_bdevs": 3, 00:20:02.061 "num_base_bdevs_discovered": 3, 00:20:02.061 "num_base_bdevs_operational": 3, 00:20:02.061 "process": { 00:20:02.061 "type": "rebuild", 00:20:02.061 "target": "spare", 00:20:02.061 "progress": { 00:20:02.061 "blocks": 18432, 00:20:02.061 "percent": 14 00:20:02.061 } 00:20:02.061 }, 00:20:02.061 "base_bdevs_list": [ 00:20:02.061 { 00:20:02.061 "name": "spare", 00:20:02.061 "uuid": "ba8e8a61-5b02-58b5-a9ed-d7922b01b746", 00:20:02.061 "is_configured": true, 00:20:02.062 "data_offset": 2048, 00:20:02.062 "data_size": 63488 00:20:02.062 }, 00:20:02.062 { 00:20:02.062 "name": "BaseBdev2", 00:20:02.062 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:02.062 "is_configured": true, 00:20:02.062 "data_offset": 2048, 00:20:02.062 "data_size": 63488 00:20:02.062 }, 00:20:02.062 { 00:20:02.062 "name": "BaseBdev3", 00:20:02.062 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:02.062 "is_configured": true, 00:20:02.062 "data_offset": 2048, 00:20:02.062 "data_size": 63488 00:20:02.062 } 00:20:02.062 ] 00:20:02.062 }' 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.062 [2024-11-20 07:16:59.149368] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.062 [2024-11-20 07:16:59.198886] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:02.062 [2024-11-20 07:16:59.198988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.062 [2024-11-20 07:16:59.199018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.062 [2024-11-20 07:16:59.199030] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.062 "name": "raid_bdev1", 00:20:02.062 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:02.062 "strip_size_kb": 64, 00:20:02.062 "state": "online", 00:20:02.062 "raid_level": "raid5f", 00:20:02.062 "superblock": true, 00:20:02.062 "num_base_bdevs": 3, 00:20:02.062 "num_base_bdevs_discovered": 2, 00:20:02.062 "num_base_bdevs_operational": 2, 00:20:02.062 "base_bdevs_list": [ 00:20:02.062 { 00:20:02.062 "name": null, 00:20:02.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.062 "is_configured": false, 00:20:02.062 "data_offset": 0, 00:20:02.062 "data_size": 63488 00:20:02.062 }, 00:20:02.062 { 00:20:02.062 "name": "BaseBdev2", 00:20:02.062 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:02.062 "is_configured": true, 00:20:02.062 "data_offset": 2048, 00:20:02.062 "data_size": 63488 00:20:02.062 }, 00:20:02.062 { 00:20:02.062 "name": "BaseBdev3", 00:20:02.062 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:02.062 "is_configured": true, 00:20:02.062 "data_offset": 2048, 00:20:02.062 "data_size": 63488 00:20:02.062 } 00:20:02.062 ] 00:20:02.062 }' 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.062 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.627 "name": "raid_bdev1", 00:20:02.627 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:02.627 "strip_size_kb": 64, 00:20:02.627 "state": "online", 00:20:02.627 "raid_level": "raid5f", 00:20:02.627 "superblock": true, 00:20:02.627 "num_base_bdevs": 3, 00:20:02.627 "num_base_bdevs_discovered": 2, 00:20:02.627 "num_base_bdevs_operational": 2, 00:20:02.627 "base_bdevs_list": [ 00:20:02.627 { 00:20:02.627 "name": null, 00:20:02.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.627 "is_configured": false, 00:20:02.627 "data_offset": 0, 00:20:02.627 "data_size": 63488 00:20:02.627 }, 00:20:02.627 { 00:20:02.627 "name": "BaseBdev2", 00:20:02.627 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:02.627 "is_configured": true, 00:20:02.627 "data_offset": 2048, 00:20:02.627 "data_size": 63488 00:20:02.627 }, 00:20:02.627 { 00:20:02.627 "name": "BaseBdev3", 00:20:02.627 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:02.627 "is_configured": true, 
00:20:02.627 "data_offset": 2048, 00:20:02.627 "data_size": 63488 00:20:02.627 } 00:20:02.627 ] 00:20:02.627 }' 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.627 [2024-11-20 07:16:59.939039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:02.627 [2024-11-20 07:16:59.939106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.627 [2024-11-20 07:16:59.939142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:02.627 [2024-11-20 07:16:59.939158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.627 [2024-11-20 07:16:59.939756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:02.627 [2024-11-20 
07:16:59.939795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:02.627 [2024-11-20 07:16:59.939929] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:02.627 [2024-11-20 07:16:59.939958] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:02.627 [2024-11-20 07:16:59.939988] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:02.627 [2024-11-20 07:16:59.940001] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:02.627 BaseBdev1 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.627 07:16:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.002 07:17:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.002 07:17:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.002 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.002 "name": "raid_bdev1", 00:20:04.002 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:04.002 "strip_size_kb": 64, 00:20:04.002 "state": "online", 00:20:04.002 "raid_level": "raid5f", 00:20:04.002 "superblock": true, 00:20:04.002 "num_base_bdevs": 3, 00:20:04.002 "num_base_bdevs_discovered": 2, 00:20:04.002 "num_base_bdevs_operational": 2, 00:20:04.002 "base_bdevs_list": [ 00:20:04.002 { 00:20:04.002 "name": null, 00:20:04.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.002 "is_configured": false, 00:20:04.002 "data_offset": 0, 00:20:04.002 "data_size": 63488 00:20:04.002 }, 00:20:04.002 { 00:20:04.002 "name": "BaseBdev2", 00:20:04.002 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:04.002 "is_configured": true, 00:20:04.002 "data_offset": 2048, 00:20:04.002 "data_size": 63488 00:20:04.002 }, 00:20:04.002 { 00:20:04.002 "name": "BaseBdev3", 00:20:04.002 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:04.002 "is_configured": true, 00:20:04.002 "data_offset": 2048, 00:20:04.002 "data_size": 63488 00:20:04.002 } 00:20:04.002 ] 00:20:04.002 }' 00:20:04.002 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.002 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.260 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.261 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.261 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.261 "name": "raid_bdev1", 00:20:04.261 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:04.261 "strip_size_kb": 64, 00:20:04.261 "state": "online", 00:20:04.261 "raid_level": "raid5f", 00:20:04.261 "superblock": true, 00:20:04.261 "num_base_bdevs": 3, 00:20:04.261 "num_base_bdevs_discovered": 2, 00:20:04.261 "num_base_bdevs_operational": 2, 00:20:04.261 "base_bdevs_list": [ 00:20:04.261 { 00:20:04.261 "name": null, 00:20:04.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.261 "is_configured": false, 00:20:04.261 "data_offset": 0, 00:20:04.261 "data_size": 63488 00:20:04.261 }, 00:20:04.261 { 00:20:04.261 "name": "BaseBdev2", 00:20:04.261 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 
00:20:04.261 "is_configured": true, 00:20:04.261 "data_offset": 2048, 00:20:04.261 "data_size": 63488 00:20:04.261 }, 00:20:04.261 { 00:20:04.261 "name": "BaseBdev3", 00:20:04.261 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:04.261 "is_configured": true, 00:20:04.261 "data_offset": 2048, 00:20:04.261 "data_size": 63488 00:20:04.261 } 00:20:04.261 ] 00:20:04.261 }' 00:20:04.261 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.519 07:17:01 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:04.519 [2024-11-20 07:17:01.639681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:04.519 [2024-11-20 07:17:01.639903] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.519 [2024-11-20 07:17:01.639929] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:04.519 request: 00:20:04.519 { 00:20:04.519 "base_bdev": "BaseBdev1", 00:20:04.519 "raid_bdev": "raid_bdev1", 00:20:04.519 "method": "bdev_raid_add_base_bdev", 00:20:04.519 "req_id": 1 00:20:04.519 } 00:20:04.519 Got JSON-RPC error response 00:20:04.519 response: 00:20:04.519 { 00:20:04.519 "code": -22, 00:20:04.519 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:04.519 } 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.519 07:17:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.453 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:05.453 "name": "raid_bdev1", 00:20:05.453 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:05.453 "strip_size_kb": 64, 00:20:05.453 "state": "online", 00:20:05.453 "raid_level": "raid5f", 00:20:05.453 "superblock": true, 00:20:05.453 "num_base_bdevs": 3, 00:20:05.453 "num_base_bdevs_discovered": 2, 00:20:05.453 "num_base_bdevs_operational": 2, 00:20:05.453 "base_bdevs_list": [ 00:20:05.453 { 00:20:05.453 "name": null, 00:20:05.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:05.453 "is_configured": false, 00:20:05.453 "data_offset": 0, 00:20:05.453 "data_size": 63488 00:20:05.453 }, 00:20:05.453 { 00:20:05.453 
"name": "BaseBdev2", 00:20:05.453 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:05.453 "is_configured": true, 00:20:05.453 "data_offset": 2048, 00:20:05.453 "data_size": 63488 00:20:05.453 }, 00:20:05.453 { 00:20:05.453 "name": "BaseBdev3", 00:20:05.453 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:05.453 "is_configured": true, 00:20:05.453 "data_offset": 2048, 00:20:05.453 "data_size": 63488 00:20:05.453 } 00:20:05.453 ] 00:20:05.453 }' 00:20:05.454 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:05.454 07:17:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.018 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.019 "name": "raid_bdev1", 00:20:06.019 "uuid": "b4e617ae-b603-439f-b317-4d64c7e794e4", 00:20:06.019 
"strip_size_kb": 64, 00:20:06.019 "state": "online", 00:20:06.019 "raid_level": "raid5f", 00:20:06.019 "superblock": true, 00:20:06.019 "num_base_bdevs": 3, 00:20:06.019 "num_base_bdevs_discovered": 2, 00:20:06.019 "num_base_bdevs_operational": 2, 00:20:06.019 "base_bdevs_list": [ 00:20:06.019 { 00:20:06.019 "name": null, 00:20:06.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.019 "is_configured": false, 00:20:06.019 "data_offset": 0, 00:20:06.019 "data_size": 63488 00:20:06.019 }, 00:20:06.019 { 00:20:06.019 "name": "BaseBdev2", 00:20:06.019 "uuid": "0055dcba-a268-51c8-9215-62b90ee1e7f4", 00:20:06.019 "is_configured": true, 00:20:06.019 "data_offset": 2048, 00:20:06.019 "data_size": 63488 00:20:06.019 }, 00:20:06.019 { 00:20:06.019 "name": "BaseBdev3", 00:20:06.019 "uuid": "f163e439-ca96-5054-80cd-d77d63d67ea8", 00:20:06.019 "is_configured": true, 00:20:06.019 "data_offset": 2048, 00:20:06.019 "data_size": 63488 00:20:06.019 } 00:20:06.019 ] 00:20:06.019 }' 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82316 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82316 ']' 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82316 00:20:06.019 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:06.277 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.277 07:17:03 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82316 00:20:06.277 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.277 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.277 killing process with pid 82316 00:20:06.277 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82316' 00:20:06.277 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82316 00:20:06.277 Received shutdown signal, test time was about 60.000000 seconds 00:20:06.277 00:20:06.277 Latency(us) 00:20:06.277 [2024-11-20T07:17:03.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.277 [2024-11-20T07:17:03.597Z] =================================================================================================================== 00:20:06.277 [2024-11-20T07:17:03.597Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.277 [2024-11-20 07:17:03.367492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.277 07:17:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82316 00:20:06.277 [2024-11-20 07:17:03.367671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.277 [2024-11-20 07:17:03.367773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.277 [2024-11-20 07:17:03.367796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:06.535 [2024-11-20 07:17:03.716264] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.468 07:17:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:20:07.468 00:20:07.468 real 0m24.931s 00:20:07.468 user 0m33.257s 
00:20:07.468 sys 0m2.685s 00:20:07.468 07:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.468 ************************************ 00:20:07.468 END TEST raid5f_rebuild_test_sb 00:20:07.468 07:17:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:07.469 ************************************ 00:20:07.469 07:17:04 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:20:07.469 07:17:04 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:20:07.469 07:17:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:07.469 07:17:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.469 07:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.469 ************************************ 00:20:07.469 START TEST raid5f_state_function_test 00:20:07.469 ************************************ 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83075 00:20:07.469 Process raid pid: 83075 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83075' 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83075 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83075 ']' 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.469 07:17:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:07.726 [2024-11-20 07:17:04.856562] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:20:07.726 [2024-11-20 07:17:04.856739] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.726 [2024-11-20 07:17:05.038139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.986 [2024-11-20 07:17:05.160916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.245 [2024-11-20 07:17:05.359460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.245 [2024-11-20 07:17:05.359535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.812 [2024-11-20 07:17:05.881595] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.812 [2024-11-20 07:17:05.881678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.812 [2024-11-20 07:17:05.881696] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:08.812 [2024-11-20 07:17:05.881713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:08.812 [2024-11-20 07:17:05.881723] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:20:08.812 [2024-11-20 07:17:05.881738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:08.812 [2024-11-20 07:17:05.881747] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:08.812 [2024-11-20 07:17:05.881761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.812 07:17:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.812 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:08.812 "name": "Existed_Raid", 00:20:08.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.812 "strip_size_kb": 64, 00:20:08.812 "state": "configuring", 00:20:08.812 "raid_level": "raid5f", 00:20:08.812 "superblock": false, 00:20:08.812 "num_base_bdevs": 4, 00:20:08.812 "num_base_bdevs_discovered": 0, 00:20:08.812 "num_base_bdevs_operational": 4, 00:20:08.812 "base_bdevs_list": [ 00:20:08.812 { 00:20:08.812 "name": "BaseBdev1", 00:20:08.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.812 "is_configured": false, 00:20:08.812 "data_offset": 0, 00:20:08.812 "data_size": 0 00:20:08.812 }, 00:20:08.812 { 00:20:08.812 "name": "BaseBdev2", 00:20:08.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.812 "is_configured": false, 00:20:08.812 "data_offset": 0, 00:20:08.812 "data_size": 0 00:20:08.812 }, 00:20:08.813 { 00:20:08.813 "name": "BaseBdev3", 00:20:08.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.813 "is_configured": false, 00:20:08.813 "data_offset": 0, 00:20:08.813 "data_size": 0 00:20:08.813 }, 00:20:08.813 { 00:20:08.813 "name": "BaseBdev4", 00:20:08.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.813 "is_configured": false, 00:20:08.813 "data_offset": 0, 00:20:08.813 "data_size": 0 00:20:08.813 } 00:20:08.813 ] 00:20:08.813 }' 00:20:08.813 07:17:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:08.813 07:17:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.070 07:17:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 [2024-11-20 07:17:06.393656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.330 [2024-11-20 07:17:06.393733] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 [2024-11-20 07:17:06.401635] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.330 [2024-11-20 07:17:06.401720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.330 [2024-11-20 07:17:06.401733] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.330 [2024-11-20 07:17:06.401750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.330 [2024-11-20 07:17:06.401774] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.330 [2024-11-20 07:17:06.401789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.330 [2024-11-20 07:17:06.401798] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:20:09.330 [2024-11-20 07:17:06.401812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 [2024-11-20 07:17:06.448615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.330 BaseBdev1 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.330 
07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 [ 00:20:09.330 { 00:20:09.330 "name": "BaseBdev1", 00:20:09.330 "aliases": [ 00:20:09.330 "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe" 00:20:09.330 ], 00:20:09.330 "product_name": "Malloc disk", 00:20:09.330 "block_size": 512, 00:20:09.330 "num_blocks": 65536, 00:20:09.330 "uuid": "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:09.330 "assigned_rate_limits": { 00:20:09.330 "rw_ios_per_sec": 0, 00:20:09.330 "rw_mbytes_per_sec": 0, 00:20:09.330 "r_mbytes_per_sec": 0, 00:20:09.330 "w_mbytes_per_sec": 0 00:20:09.330 }, 00:20:09.330 "claimed": true, 00:20:09.330 "claim_type": "exclusive_write", 00:20:09.330 "zoned": false, 00:20:09.330 "supported_io_types": { 00:20:09.330 "read": true, 00:20:09.330 "write": true, 00:20:09.330 "unmap": true, 00:20:09.330 "flush": true, 00:20:09.330 "reset": true, 00:20:09.330 "nvme_admin": false, 00:20:09.330 "nvme_io": false, 00:20:09.330 "nvme_io_md": false, 00:20:09.330 "write_zeroes": true, 00:20:09.330 "zcopy": true, 00:20:09.330 "get_zone_info": false, 00:20:09.330 "zone_management": false, 00:20:09.330 "zone_append": false, 00:20:09.330 "compare": false, 00:20:09.330 "compare_and_write": false, 00:20:09.330 "abort": true, 00:20:09.330 "seek_hole": false, 00:20:09.330 "seek_data": false, 00:20:09.330 "copy": true, 00:20:09.330 "nvme_iov_md": false 00:20:09.330 }, 00:20:09.330 "memory_domains": [ 00:20:09.330 { 00:20:09.330 "dma_device_id": "system", 00:20:09.330 "dma_device_type": 1 00:20:09.330 }, 00:20:09.330 { 00:20:09.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.330 "dma_device_type": 2 00:20:09.330 } 00:20:09.330 ], 00:20:09.330 "driver_specific": {} 00:20:09.330 } 
00:20:09.330 ] 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.330 "name": "Existed_Raid", 00:20:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.330 "strip_size_kb": 64, 00:20:09.330 "state": "configuring", 00:20:09.330 "raid_level": "raid5f", 00:20:09.330 "superblock": false, 00:20:09.330 "num_base_bdevs": 4, 00:20:09.330 "num_base_bdevs_discovered": 1, 00:20:09.330 "num_base_bdevs_operational": 4, 00:20:09.330 "base_bdevs_list": [ 00:20:09.330 { 00:20:09.330 "name": "BaseBdev1", 00:20:09.330 "uuid": "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:09.330 "is_configured": true, 00:20:09.330 "data_offset": 0, 00:20:09.330 "data_size": 65536 00:20:09.330 }, 00:20:09.330 { 00:20:09.330 "name": "BaseBdev2", 00:20:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.330 "is_configured": false, 00:20:09.330 "data_offset": 0, 00:20:09.330 "data_size": 0 00:20:09.330 }, 00:20:09.330 { 00:20:09.330 "name": "BaseBdev3", 00:20:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.330 "is_configured": false, 00:20:09.330 "data_offset": 0, 00:20:09.330 "data_size": 0 00:20:09.330 }, 00:20:09.330 { 00:20:09.330 "name": "BaseBdev4", 00:20:09.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.330 "is_configured": false, 00:20:09.330 "data_offset": 0, 00:20:09.330 "data_size": 0 00:20:09.330 } 00:20:09.330 ] 00:20:09.330 }' 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.330 07:17:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 
[2024-11-20 07:17:07.008847] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.903 [2024-11-20 07:17:07.008963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 [2024-11-20 07:17:07.016984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.903 [2024-11-20 07:17:07.019631] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.903 [2024-11-20 07:17:07.019690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.903 [2024-11-20 07:17:07.019707] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:09.903 [2024-11-20 07:17:07.019725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:09.903 [2024-11-20 07:17:07.019736] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:09.903 [2024-11-20 07:17:07.019750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.903 "name": "Existed_Raid", 00:20:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:09.903 "strip_size_kb": 64, 00:20:09.903 "state": "configuring", 00:20:09.903 "raid_level": "raid5f", 00:20:09.903 "superblock": false, 00:20:09.903 "num_base_bdevs": 4, 00:20:09.903 "num_base_bdevs_discovered": 1, 00:20:09.903 "num_base_bdevs_operational": 4, 00:20:09.903 "base_bdevs_list": [ 00:20:09.903 { 00:20:09.903 "name": "BaseBdev1", 00:20:09.903 "uuid": "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:09.903 "is_configured": true, 00:20:09.903 "data_offset": 0, 00:20:09.903 "data_size": 65536 00:20:09.903 }, 00:20:09.903 { 00:20:09.903 "name": "BaseBdev2", 00:20:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.903 "is_configured": false, 00:20:09.903 "data_offset": 0, 00:20:09.903 "data_size": 0 00:20:09.903 }, 00:20:09.903 { 00:20:09.903 "name": "BaseBdev3", 00:20:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.903 "is_configured": false, 00:20:09.903 "data_offset": 0, 00:20:09.903 "data_size": 0 00:20:09.903 }, 00:20:09.903 { 00:20:09.903 "name": "BaseBdev4", 00:20:09.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.903 "is_configured": false, 00:20:09.903 "data_offset": 0, 00:20:09.903 "data_size": 0 00:20:09.903 } 00:20:09.903 ] 00:20:09.903 }' 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.903 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.471 [2024-11-20 07:17:07.621812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.471 BaseBdev2 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.471 [ 00:20:10.471 { 00:20:10.471 "name": "BaseBdev2", 00:20:10.471 "aliases": [ 00:20:10.471 "74cad3d5-573f-46aa-8707-35bb39778a7f" 00:20:10.471 ], 00:20:10.471 "product_name": "Malloc disk", 00:20:10.471 "block_size": 512, 00:20:10.471 "num_blocks": 65536, 00:20:10.471 "uuid": "74cad3d5-573f-46aa-8707-35bb39778a7f", 00:20:10.471 "assigned_rate_limits": { 00:20:10.471 "rw_ios_per_sec": 0, 00:20:10.471 "rw_mbytes_per_sec": 0, 00:20:10.471 
"r_mbytes_per_sec": 0, 00:20:10.471 "w_mbytes_per_sec": 0 00:20:10.471 }, 00:20:10.471 "claimed": true, 00:20:10.471 "claim_type": "exclusive_write", 00:20:10.471 "zoned": false, 00:20:10.471 "supported_io_types": { 00:20:10.471 "read": true, 00:20:10.471 "write": true, 00:20:10.471 "unmap": true, 00:20:10.471 "flush": true, 00:20:10.471 "reset": true, 00:20:10.471 "nvme_admin": false, 00:20:10.471 "nvme_io": false, 00:20:10.471 "nvme_io_md": false, 00:20:10.471 "write_zeroes": true, 00:20:10.471 "zcopy": true, 00:20:10.471 "get_zone_info": false, 00:20:10.471 "zone_management": false, 00:20:10.471 "zone_append": false, 00:20:10.471 "compare": false, 00:20:10.471 "compare_and_write": false, 00:20:10.471 "abort": true, 00:20:10.471 "seek_hole": false, 00:20:10.471 "seek_data": false, 00:20:10.471 "copy": true, 00:20:10.471 "nvme_iov_md": false 00:20:10.471 }, 00:20:10.471 "memory_domains": [ 00:20:10.471 { 00:20:10.471 "dma_device_id": "system", 00:20:10.471 "dma_device_type": 1 00:20:10.471 }, 00:20:10.471 { 00:20:10.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.471 "dma_device_type": 2 00:20:10.471 } 00:20:10.471 ], 00:20:10.471 "driver_specific": {} 00:20:10.471 } 00:20:10.471 ] 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.471 "name": "Existed_Raid", 00:20:10.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.471 "strip_size_kb": 64, 00:20:10.471 "state": "configuring", 00:20:10.471 "raid_level": "raid5f", 00:20:10.471 "superblock": false, 00:20:10.471 "num_base_bdevs": 4, 00:20:10.471 "num_base_bdevs_discovered": 2, 00:20:10.471 "num_base_bdevs_operational": 4, 00:20:10.471 "base_bdevs_list": [ 00:20:10.471 { 00:20:10.471 "name": "BaseBdev1", 00:20:10.471 "uuid": 
"45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:10.471 "is_configured": true, 00:20:10.471 "data_offset": 0, 00:20:10.471 "data_size": 65536 00:20:10.471 }, 00:20:10.471 { 00:20:10.471 "name": "BaseBdev2", 00:20:10.471 "uuid": "74cad3d5-573f-46aa-8707-35bb39778a7f", 00:20:10.471 "is_configured": true, 00:20:10.471 "data_offset": 0, 00:20:10.471 "data_size": 65536 00:20:10.471 }, 00:20:10.471 { 00:20:10.471 "name": "BaseBdev3", 00:20:10.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.471 "is_configured": false, 00:20:10.471 "data_offset": 0, 00:20:10.471 "data_size": 0 00:20:10.471 }, 00:20:10.471 { 00:20:10.471 "name": "BaseBdev4", 00:20:10.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.471 "is_configured": false, 00:20:10.471 "data_offset": 0, 00:20:10.471 "data_size": 0 00:20:10.471 } 00:20:10.471 ] 00:20:10.471 }' 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.471 07:17:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.039 [2024-11-20 07:17:08.235910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:11.039 BaseBdev3 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.039 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.039 [ 00:20:11.039 { 00:20:11.039 "name": "BaseBdev3", 00:20:11.039 "aliases": [ 00:20:11.039 "cdfa1060-61a5-4988-96a6-0c39e70cda8e" 00:20:11.039 ], 00:20:11.039 "product_name": "Malloc disk", 00:20:11.039 "block_size": 512, 00:20:11.039 "num_blocks": 65536, 00:20:11.039 "uuid": "cdfa1060-61a5-4988-96a6-0c39e70cda8e", 00:20:11.039 "assigned_rate_limits": { 00:20:11.039 "rw_ios_per_sec": 0, 00:20:11.039 "rw_mbytes_per_sec": 0, 00:20:11.039 "r_mbytes_per_sec": 0, 00:20:11.039 "w_mbytes_per_sec": 0 00:20:11.039 }, 00:20:11.039 "claimed": true, 00:20:11.039 "claim_type": "exclusive_write", 00:20:11.039 "zoned": false, 00:20:11.039 "supported_io_types": { 00:20:11.039 "read": true, 00:20:11.039 "write": true, 00:20:11.039 "unmap": true, 00:20:11.039 "flush": true, 00:20:11.039 "reset": true, 00:20:11.039 "nvme_admin": false, 
00:20:11.039 "nvme_io": false, 00:20:11.039 "nvme_io_md": false, 00:20:11.039 "write_zeroes": true, 00:20:11.039 "zcopy": true, 00:20:11.039 "get_zone_info": false, 00:20:11.039 "zone_management": false, 00:20:11.039 "zone_append": false, 00:20:11.039 "compare": false, 00:20:11.039 "compare_and_write": false, 00:20:11.039 "abort": true, 00:20:11.039 "seek_hole": false, 00:20:11.039 "seek_data": false, 00:20:11.039 "copy": true, 00:20:11.039 "nvme_iov_md": false 00:20:11.039 }, 00:20:11.039 "memory_domains": [ 00:20:11.040 { 00:20:11.040 "dma_device_id": "system", 00:20:11.040 "dma_device_type": 1 00:20:11.040 }, 00:20:11.040 { 00:20:11.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.040 "dma_device_type": 2 00:20:11.040 } 00:20:11.040 ], 00:20:11.040 "driver_specific": {} 00:20:11.040 } 00:20:11.040 ] 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.040 "name": "Existed_Raid", 00:20:11.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.040 "strip_size_kb": 64, 00:20:11.040 "state": "configuring", 00:20:11.040 "raid_level": "raid5f", 00:20:11.040 "superblock": false, 00:20:11.040 "num_base_bdevs": 4, 00:20:11.040 "num_base_bdevs_discovered": 3, 00:20:11.040 "num_base_bdevs_operational": 4, 00:20:11.040 "base_bdevs_list": [ 00:20:11.040 { 00:20:11.040 "name": "BaseBdev1", 00:20:11.040 "uuid": "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 0, 00:20:11.040 "data_size": 65536 00:20:11.040 }, 00:20:11.040 { 00:20:11.040 "name": "BaseBdev2", 00:20:11.040 "uuid": "74cad3d5-573f-46aa-8707-35bb39778a7f", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 0, 00:20:11.040 "data_size": 65536 00:20:11.040 }, 00:20:11.040 { 
00:20:11.040 "name": "BaseBdev3", 00:20:11.040 "uuid": "cdfa1060-61a5-4988-96a6-0c39e70cda8e", 00:20:11.040 "is_configured": true, 00:20:11.040 "data_offset": 0, 00:20:11.040 "data_size": 65536 00:20:11.040 }, 00:20:11.040 { 00:20:11.040 "name": "BaseBdev4", 00:20:11.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.040 "is_configured": false, 00:20:11.040 "data_offset": 0, 00:20:11.040 "data_size": 0 00:20:11.040 } 00:20:11.040 ] 00:20:11.040 }' 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.040 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.607 [2024-11-20 07:17:08.838317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:11.607 [2024-11-20 07:17:08.838482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:11.607 [2024-11-20 07:17:08.838500] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:11.607 [2024-11-20 07:17:08.838842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:11.607 [2024-11-20 07:17:08.846775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:11.607 [2024-11-20 07:17:08.846846] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:11.607 [2024-11-20 07:17:08.847233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:11.607 BaseBdev4 00:20:11.607 07:17:08 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:11.607 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.608 [ 00:20:11.608 { 00:20:11.608 "name": "BaseBdev4", 00:20:11.608 "aliases": [ 00:20:11.608 "79877bed-b4b0-412b-a68e-2d72a019ea41" 00:20:11.608 ], 00:20:11.608 "product_name": "Malloc disk", 00:20:11.608 "block_size": 512, 00:20:11.608 "num_blocks": 65536, 00:20:11.608 "uuid": "79877bed-b4b0-412b-a68e-2d72a019ea41", 00:20:11.608 "assigned_rate_limits": { 00:20:11.608 "rw_ios_per_sec": 0, 00:20:11.608 
"rw_mbytes_per_sec": 0, 00:20:11.608 "r_mbytes_per_sec": 0, 00:20:11.608 "w_mbytes_per_sec": 0 00:20:11.608 }, 00:20:11.608 "claimed": true, 00:20:11.608 "claim_type": "exclusive_write", 00:20:11.608 "zoned": false, 00:20:11.608 "supported_io_types": { 00:20:11.608 "read": true, 00:20:11.608 "write": true, 00:20:11.608 "unmap": true, 00:20:11.608 "flush": true, 00:20:11.608 "reset": true, 00:20:11.608 "nvme_admin": false, 00:20:11.608 "nvme_io": false, 00:20:11.608 "nvme_io_md": false, 00:20:11.608 "write_zeroes": true, 00:20:11.608 "zcopy": true, 00:20:11.608 "get_zone_info": false, 00:20:11.608 "zone_management": false, 00:20:11.608 "zone_append": false, 00:20:11.608 "compare": false, 00:20:11.608 "compare_and_write": false, 00:20:11.608 "abort": true, 00:20:11.608 "seek_hole": false, 00:20:11.608 "seek_data": false, 00:20:11.608 "copy": true, 00:20:11.608 "nvme_iov_md": false 00:20:11.608 }, 00:20:11.608 "memory_domains": [ 00:20:11.608 { 00:20:11.608 "dma_device_id": "system", 00:20:11.608 "dma_device_type": 1 00:20:11.608 }, 00:20:11.608 { 00:20:11.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.608 "dma_device_type": 2 00:20:11.608 } 00:20:11.608 ], 00:20:11.608 "driver_specific": {} 00:20:11.608 } 00:20:11.608 ] 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.608 07:17:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:11.608 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.866 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.866 "name": "Existed_Raid", 00:20:11.866 "uuid": "0b55feab-e16b-425a-99cb-949c717dc6c9", 00:20:11.866 "strip_size_kb": 64, 00:20:11.866 "state": "online", 00:20:11.866 "raid_level": "raid5f", 00:20:11.866 "superblock": false, 00:20:11.866 "num_base_bdevs": 4, 00:20:11.866 "num_base_bdevs_discovered": 4, 00:20:11.866 "num_base_bdevs_operational": 4, 00:20:11.866 "base_bdevs_list": [ 00:20:11.866 { 00:20:11.866 "name": 
"BaseBdev1", 00:20:11.866 "uuid": "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:11.866 "is_configured": true, 00:20:11.866 "data_offset": 0, 00:20:11.866 "data_size": 65536 00:20:11.866 }, 00:20:11.866 { 00:20:11.866 "name": "BaseBdev2", 00:20:11.866 "uuid": "74cad3d5-573f-46aa-8707-35bb39778a7f", 00:20:11.866 "is_configured": true, 00:20:11.866 "data_offset": 0, 00:20:11.866 "data_size": 65536 00:20:11.866 }, 00:20:11.866 { 00:20:11.866 "name": "BaseBdev3", 00:20:11.866 "uuid": "cdfa1060-61a5-4988-96a6-0c39e70cda8e", 00:20:11.866 "is_configured": true, 00:20:11.866 "data_offset": 0, 00:20:11.866 "data_size": 65536 00:20:11.866 }, 00:20:11.866 { 00:20:11.866 "name": "BaseBdev4", 00:20:11.866 "uuid": "79877bed-b4b0-412b-a68e-2d72a019ea41", 00:20:11.866 "is_configured": true, 00:20:11.866 "data_offset": 0, 00:20:11.866 "data_size": 65536 00:20:11.866 } 00:20:11.866 ] 00:20:11.866 }' 00:20:11.866 07:17:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:11.866 07:17:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:12.125 [2024-11-20 07:17:09.423653] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:12.125 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.416 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:12.416 "name": "Existed_Raid", 00:20:12.416 "aliases": [ 00:20:12.416 "0b55feab-e16b-425a-99cb-949c717dc6c9" 00:20:12.416 ], 00:20:12.416 "product_name": "Raid Volume", 00:20:12.416 "block_size": 512, 00:20:12.416 "num_blocks": 196608, 00:20:12.416 "uuid": "0b55feab-e16b-425a-99cb-949c717dc6c9", 00:20:12.416 "assigned_rate_limits": { 00:20:12.416 "rw_ios_per_sec": 0, 00:20:12.416 "rw_mbytes_per_sec": 0, 00:20:12.416 "r_mbytes_per_sec": 0, 00:20:12.416 "w_mbytes_per_sec": 0 00:20:12.416 }, 00:20:12.416 "claimed": false, 00:20:12.416 "zoned": false, 00:20:12.416 "supported_io_types": { 00:20:12.416 "read": true, 00:20:12.416 "write": true, 00:20:12.416 "unmap": false, 00:20:12.416 "flush": false, 00:20:12.416 "reset": true, 00:20:12.416 "nvme_admin": false, 00:20:12.416 "nvme_io": false, 00:20:12.416 "nvme_io_md": false, 00:20:12.416 "write_zeroes": true, 00:20:12.416 "zcopy": false, 00:20:12.416 "get_zone_info": false, 00:20:12.416 "zone_management": false, 00:20:12.416 "zone_append": false, 00:20:12.416 "compare": false, 00:20:12.416 "compare_and_write": false, 00:20:12.416 "abort": false, 00:20:12.416 "seek_hole": false, 00:20:12.416 "seek_data": false, 00:20:12.416 "copy": false, 00:20:12.416 "nvme_iov_md": false 00:20:12.416 }, 00:20:12.416 "driver_specific": { 00:20:12.416 "raid": { 00:20:12.416 "uuid": "0b55feab-e16b-425a-99cb-949c717dc6c9", 00:20:12.416 "strip_size_kb": 64, 
00:20:12.416 "state": "online", 00:20:12.416 "raid_level": "raid5f", 00:20:12.416 "superblock": false, 00:20:12.416 "num_base_bdevs": 4, 00:20:12.416 "num_base_bdevs_discovered": 4, 00:20:12.416 "num_base_bdevs_operational": 4, 00:20:12.416 "base_bdevs_list": [ 00:20:12.416 { 00:20:12.416 "name": "BaseBdev1", 00:20:12.416 "uuid": "45a36c41-2ff4-4e2c-9574-067ba4aaf2fe", 00:20:12.416 "is_configured": true, 00:20:12.416 "data_offset": 0, 00:20:12.416 "data_size": 65536 00:20:12.416 }, 00:20:12.416 { 00:20:12.416 "name": "BaseBdev2", 00:20:12.416 "uuid": "74cad3d5-573f-46aa-8707-35bb39778a7f", 00:20:12.416 "is_configured": true, 00:20:12.417 "data_offset": 0, 00:20:12.417 "data_size": 65536 00:20:12.417 }, 00:20:12.417 { 00:20:12.417 "name": "BaseBdev3", 00:20:12.417 "uuid": "cdfa1060-61a5-4988-96a6-0c39e70cda8e", 00:20:12.417 "is_configured": true, 00:20:12.417 "data_offset": 0, 00:20:12.417 "data_size": 65536 00:20:12.417 }, 00:20:12.417 { 00:20:12.417 "name": "BaseBdev4", 00:20:12.417 "uuid": "79877bed-b4b0-412b-a68e-2d72a019ea41", 00:20:12.417 "is_configured": true, 00:20:12.417 "data_offset": 0, 00:20:12.417 "data_size": 65536 00:20:12.417 } 00:20:12.417 ] 00:20:12.417 } 00:20:12.417 } 00:20:12.417 }' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:12.417 BaseBdev2 00:20:12.417 BaseBdev3 00:20:12.417 BaseBdev4' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.417 07:17:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.417 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:12.695 [2024-11-20 07:17:09.799583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:12.695 07:17:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:12.695 "name": "Existed_Raid", 00:20:12.695 "uuid": "0b55feab-e16b-425a-99cb-949c717dc6c9", 00:20:12.695 "strip_size_kb": 64, 00:20:12.695 "state": "online", 00:20:12.695 "raid_level": "raid5f", 00:20:12.695 "superblock": false, 00:20:12.695 "num_base_bdevs": 4, 00:20:12.695 "num_base_bdevs_discovered": 3, 00:20:12.695 "num_base_bdevs_operational": 3, 00:20:12.695 "base_bdevs_list": [ 00:20:12.695 { 00:20:12.695 "name": null, 00:20:12.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.695 "is_configured": false, 00:20:12.695 "data_offset": 0, 00:20:12.695 "data_size": 65536 00:20:12.695 }, 00:20:12.695 { 00:20:12.695 "name": "BaseBdev2", 00:20:12.695 "uuid": "74cad3d5-573f-46aa-8707-35bb39778a7f", 00:20:12.695 "is_configured": true, 00:20:12.695 "data_offset": 0, 00:20:12.695 "data_size": 65536 00:20:12.695 }, 00:20:12.695 { 00:20:12.695 "name": "BaseBdev3", 00:20:12.695 "uuid": "cdfa1060-61a5-4988-96a6-0c39e70cda8e", 00:20:12.695 "is_configured": true, 00:20:12.695 "data_offset": 0, 00:20:12.695 "data_size": 65536 00:20:12.695 }, 00:20:12.695 { 00:20:12.695 "name": "BaseBdev4", 00:20:12.695 "uuid": "79877bed-b4b0-412b-a68e-2d72a019ea41", 00:20:12.695 "is_configured": true, 00:20:12.695 "data_offset": 0, 00:20:12.695 "data_size": 65536 00:20:12.695 } 00:20:12.695 ] 00:20:12.695 }' 00:20:12.695 
07:17:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:12.695 07:17:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.261 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.261 [2024-11-20 07:17:10.521047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:13.261 [2024-11-20 07:17:10.521192] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.518 [2024-11-20 07:17:10.606484] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.518 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.519 [2024-11-20 07:17:10.670543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.519 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.519 [2024-11-20 07:17:10.820125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:13.519 [2024-11-20 07:17:10.820208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.778 BaseBdev2 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.778 [ 00:20:13.778 { 00:20:13.778 "name": "BaseBdev2", 00:20:13.778 "aliases": [ 00:20:13.778 "a1961ab6-1ecd-4d76-a57b-55c31e967d0a" 00:20:13.778 ], 00:20:13.778 "product_name": "Malloc disk", 00:20:13.778 "block_size": 512, 00:20:13.778 "num_blocks": 65536, 00:20:13.778 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:13.778 "assigned_rate_limits": { 00:20:13.778 "rw_ios_per_sec": 0, 00:20:13.778 "rw_mbytes_per_sec": 0, 00:20:13.778 "r_mbytes_per_sec": 0, 00:20:13.778 "w_mbytes_per_sec": 0 00:20:13.778 }, 00:20:13.778 "claimed": false, 00:20:13.778 "zoned": false, 00:20:13.778 "supported_io_types": { 00:20:13.778 "read": true, 00:20:13.778 "write": true, 00:20:13.778 "unmap": true, 00:20:13.778 "flush": true, 00:20:13.778 "reset": true, 00:20:13.778 "nvme_admin": false, 00:20:13.778 "nvme_io": false, 00:20:13.778 "nvme_io_md": false, 00:20:13.778 "write_zeroes": true, 00:20:13.778 "zcopy": true, 00:20:13.778 "get_zone_info": false, 00:20:13.778 "zone_management": false, 00:20:13.778 "zone_append": false, 00:20:13.778 "compare": false, 00:20:13.778 "compare_and_write": false, 00:20:13.778 "abort": true, 00:20:13.778 "seek_hole": false, 00:20:13.778 "seek_data": false, 00:20:13.778 "copy": true, 00:20:13.778 "nvme_iov_md": false 00:20:13.778 }, 00:20:13.778 "memory_domains": [ 00:20:13.778 { 00:20:13.778 "dma_device_id": "system", 00:20:13.778 
"dma_device_type": 1 00:20:13.778 }, 00:20:13.778 { 00:20:13.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:13.778 "dma_device_type": 2 00:20:13.778 } 00:20:13.778 ], 00:20:13.778 "driver_specific": {} 00:20:13.778 } 00:20:13.778 ] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.778 BaseBdev3 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:13.778 07:17:11 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.778 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.037 [ 00:20:14.037 { 00:20:14.037 "name": "BaseBdev3", 00:20:14.037 "aliases": [ 00:20:14.037 "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a" 00:20:14.037 ], 00:20:14.037 "product_name": "Malloc disk", 00:20:14.037 "block_size": 512, 00:20:14.037 "num_blocks": 65536, 00:20:14.037 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:14.037 "assigned_rate_limits": { 00:20:14.037 "rw_ios_per_sec": 0, 00:20:14.037 "rw_mbytes_per_sec": 0, 00:20:14.037 "r_mbytes_per_sec": 0, 00:20:14.037 "w_mbytes_per_sec": 0 00:20:14.037 }, 00:20:14.037 "claimed": false, 00:20:14.037 "zoned": false, 00:20:14.037 "supported_io_types": { 00:20:14.037 "read": true, 00:20:14.037 "write": true, 00:20:14.037 "unmap": true, 00:20:14.037 "flush": true, 00:20:14.037 "reset": true, 00:20:14.037 "nvme_admin": false, 00:20:14.037 "nvme_io": false, 00:20:14.037 "nvme_io_md": false, 00:20:14.037 "write_zeroes": true, 00:20:14.037 "zcopy": true, 00:20:14.037 "get_zone_info": false, 00:20:14.037 "zone_management": false, 00:20:14.037 "zone_append": false, 00:20:14.037 "compare": false, 00:20:14.037 "compare_and_write": false, 00:20:14.037 "abort": true, 00:20:14.037 "seek_hole": false, 00:20:14.037 "seek_data": false, 00:20:14.037 "copy": true, 00:20:14.037 "nvme_iov_md": false 00:20:14.037 }, 00:20:14.037 "memory_domains": [ 00:20:14.037 { 00:20:14.038 
"dma_device_id": "system", 00:20:14.038 "dma_device_type": 1 00:20:14.038 }, 00:20:14.038 { 00:20:14.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.038 "dma_device_type": 2 00:20:14.038 } 00:20:14.038 ], 00:20:14.038 "driver_specific": {} 00:20:14.038 } 00:20:14.038 ] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.038 BaseBdev4 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.038 [ 00:20:14.038 { 00:20:14.038 "name": "BaseBdev4", 00:20:14.038 "aliases": [ 00:20:14.038 "e066d6c5-16fe-4e67-98f8-0ee933f6bd22" 00:20:14.038 ], 00:20:14.038 "product_name": "Malloc disk", 00:20:14.038 "block_size": 512, 00:20:14.038 "num_blocks": 65536, 00:20:14.038 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:14.038 "assigned_rate_limits": { 00:20:14.038 "rw_ios_per_sec": 0, 00:20:14.038 "rw_mbytes_per_sec": 0, 00:20:14.038 "r_mbytes_per_sec": 0, 00:20:14.038 "w_mbytes_per_sec": 0 00:20:14.038 }, 00:20:14.038 "claimed": false, 00:20:14.038 "zoned": false, 00:20:14.038 "supported_io_types": { 00:20:14.038 "read": true, 00:20:14.038 "write": true, 00:20:14.038 "unmap": true, 00:20:14.038 "flush": true, 00:20:14.038 "reset": true, 00:20:14.038 "nvme_admin": false, 00:20:14.038 "nvme_io": false, 00:20:14.038 "nvme_io_md": false, 00:20:14.038 "write_zeroes": true, 00:20:14.038 "zcopy": true, 00:20:14.038 "get_zone_info": false, 00:20:14.038 "zone_management": false, 00:20:14.038 "zone_append": false, 00:20:14.038 "compare": false, 00:20:14.038 "compare_and_write": false, 00:20:14.038 "abort": true, 00:20:14.038 "seek_hole": false, 00:20:14.038 "seek_data": false, 00:20:14.038 "copy": true, 00:20:14.038 "nvme_iov_md": false 00:20:14.038 }, 00:20:14.038 "memory_domains": [ 
00:20:14.038 { 00:20:14.038 "dma_device_id": "system", 00:20:14.038 "dma_device_type": 1 00:20:14.038 }, 00:20:14.038 { 00:20:14.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:14.038 "dma_device_type": 2 00:20:14.038 } 00:20:14.038 ], 00:20:14.038 "driver_specific": {} 00:20:14.038 } 00:20:14.038 ] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.038 [2024-11-20 07:17:11.190726] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:14.038 [2024-11-20 07:17:11.190796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:14.038 [2024-11-20 07:17:11.190835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:14.038 [2024-11-20 07:17:11.193424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.038 [2024-11-20 07:17:11.193650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.038 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.038 "name": "Existed_Raid", 00:20:14.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.038 "strip_size_kb": 64, 00:20:14.038 "state": "configuring", 00:20:14.038 "raid_level": "raid5f", 00:20:14.038 
"superblock": false, 00:20:14.038 "num_base_bdevs": 4, 00:20:14.039 "num_base_bdevs_discovered": 3, 00:20:14.039 "num_base_bdevs_operational": 4, 00:20:14.039 "base_bdevs_list": [ 00:20:14.039 { 00:20:14.039 "name": "BaseBdev1", 00:20:14.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.039 "is_configured": false, 00:20:14.039 "data_offset": 0, 00:20:14.039 "data_size": 0 00:20:14.039 }, 00:20:14.039 { 00:20:14.039 "name": "BaseBdev2", 00:20:14.039 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:14.039 "is_configured": true, 00:20:14.039 "data_offset": 0, 00:20:14.039 "data_size": 65536 00:20:14.039 }, 00:20:14.039 { 00:20:14.039 "name": "BaseBdev3", 00:20:14.039 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:14.039 "is_configured": true, 00:20:14.039 "data_offset": 0, 00:20:14.039 "data_size": 65536 00:20:14.039 }, 00:20:14.039 { 00:20:14.039 "name": "BaseBdev4", 00:20:14.039 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:14.039 "is_configured": true, 00:20:14.039 "data_offset": 0, 00:20:14.039 "data_size": 65536 00:20:14.039 } 00:20:14.039 ] 00:20:14.039 }' 00:20:14.039 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.039 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.606 [2024-11-20 07:17:11.734861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.606 "name": "Existed_Raid", 00:20:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.606 "strip_size_kb": 64, 00:20:14.606 "state": "configuring", 00:20:14.606 "raid_level": "raid5f", 00:20:14.606 "superblock": false, 
00:20:14.606 "num_base_bdevs": 4, 00:20:14.606 "num_base_bdevs_discovered": 2, 00:20:14.606 "num_base_bdevs_operational": 4, 00:20:14.606 "base_bdevs_list": [ 00:20:14.606 { 00:20:14.606 "name": "BaseBdev1", 00:20:14.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:14.606 "is_configured": false, 00:20:14.606 "data_offset": 0, 00:20:14.606 "data_size": 0 00:20:14.606 }, 00:20:14.606 { 00:20:14.606 "name": null, 00:20:14.606 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:14.606 "is_configured": false, 00:20:14.606 "data_offset": 0, 00:20:14.606 "data_size": 65536 00:20:14.606 }, 00:20:14.606 { 00:20:14.606 "name": "BaseBdev3", 00:20:14.606 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:14.606 "is_configured": true, 00:20:14.606 "data_offset": 0, 00:20:14.606 "data_size": 65536 00:20:14.606 }, 00:20:14.606 { 00:20:14.606 "name": "BaseBdev4", 00:20:14.606 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:14.606 "is_configured": true, 00:20:14.606 "data_offset": 0, 00:20:14.606 "data_size": 65536 00:20:14.606 } 00:20:14.606 ] 00:20:14.606 }' 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.606 07:17:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.171 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:15.171 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.171 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:15.172 
07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.172 [2024-11-20 07:17:12.342706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.172 BaseBdev1 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.172 
07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.172 [ 00:20:15.172 { 00:20:15.172 "name": "BaseBdev1", 00:20:15.172 "aliases": [ 00:20:15.172 "3da620b9-4d04-4306-b7e4-42e7ac630897" 00:20:15.172 ], 00:20:15.172 "product_name": "Malloc disk", 00:20:15.172 "block_size": 512, 00:20:15.172 "num_blocks": 65536, 00:20:15.172 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:15.172 "assigned_rate_limits": { 00:20:15.172 "rw_ios_per_sec": 0, 00:20:15.172 "rw_mbytes_per_sec": 0, 00:20:15.172 "r_mbytes_per_sec": 0, 00:20:15.172 "w_mbytes_per_sec": 0 00:20:15.172 }, 00:20:15.172 "claimed": true, 00:20:15.172 "claim_type": "exclusive_write", 00:20:15.172 "zoned": false, 00:20:15.172 "supported_io_types": { 00:20:15.172 "read": true, 00:20:15.172 "write": true, 00:20:15.172 "unmap": true, 00:20:15.172 "flush": true, 00:20:15.172 "reset": true, 00:20:15.172 "nvme_admin": false, 00:20:15.172 "nvme_io": false, 00:20:15.172 "nvme_io_md": false, 00:20:15.172 "write_zeroes": true, 00:20:15.172 "zcopy": true, 00:20:15.172 "get_zone_info": false, 00:20:15.172 "zone_management": false, 00:20:15.172 "zone_append": false, 00:20:15.172 "compare": false, 00:20:15.172 "compare_and_write": false, 00:20:15.172 "abort": true, 00:20:15.172 "seek_hole": false, 00:20:15.172 "seek_data": false, 00:20:15.172 "copy": true, 00:20:15.172 "nvme_iov_md": false 00:20:15.172 }, 00:20:15.172 "memory_domains": [ 00:20:15.172 { 00:20:15.172 "dma_device_id": "system", 00:20:15.172 "dma_device_type": 1 00:20:15.172 }, 00:20:15.172 { 00:20:15.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.172 "dma_device_type": 2 00:20:15.172 } 00:20:15.172 ], 00:20:15.172 "driver_specific": {} 00:20:15.172 } 00:20:15.172 ] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:15.172 07:17:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.172 "name": "Existed_Raid", 00:20:15.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.172 "strip_size_kb": 64, 00:20:15.172 "state": 
"configuring", 00:20:15.172 "raid_level": "raid5f", 00:20:15.172 "superblock": false, 00:20:15.172 "num_base_bdevs": 4, 00:20:15.172 "num_base_bdevs_discovered": 3, 00:20:15.172 "num_base_bdevs_operational": 4, 00:20:15.172 "base_bdevs_list": [ 00:20:15.172 { 00:20:15.172 "name": "BaseBdev1", 00:20:15.172 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:15.172 "is_configured": true, 00:20:15.172 "data_offset": 0, 00:20:15.172 "data_size": 65536 00:20:15.172 }, 00:20:15.172 { 00:20:15.172 "name": null, 00:20:15.172 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:15.172 "is_configured": false, 00:20:15.172 "data_offset": 0, 00:20:15.172 "data_size": 65536 00:20:15.172 }, 00:20:15.172 { 00:20:15.172 "name": "BaseBdev3", 00:20:15.172 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:15.172 "is_configured": true, 00:20:15.172 "data_offset": 0, 00:20:15.172 "data_size": 65536 00:20:15.172 }, 00:20:15.172 { 00:20:15.172 "name": "BaseBdev4", 00:20:15.172 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:15.172 "is_configured": true, 00:20:15.172 "data_offset": 0, 00:20:15.172 "data_size": 65536 00:20:15.172 } 00:20:15.172 ] 00:20:15.172 }' 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.172 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.739 07:17:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.739 [2024-11-20 07:17:12.950985] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:15.739 07:17:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.739 07:17:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.739 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.739 "name": "Existed_Raid", 00:20:15.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.739 "strip_size_kb": 64, 00:20:15.739 "state": "configuring", 00:20:15.739 "raid_level": "raid5f", 00:20:15.739 "superblock": false, 00:20:15.739 "num_base_bdevs": 4, 00:20:15.739 "num_base_bdevs_discovered": 2, 00:20:15.739 "num_base_bdevs_operational": 4, 00:20:15.739 "base_bdevs_list": [ 00:20:15.739 { 00:20:15.739 "name": "BaseBdev1", 00:20:15.739 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:15.739 "is_configured": true, 00:20:15.739 "data_offset": 0, 00:20:15.739 "data_size": 65536 00:20:15.739 }, 00:20:15.739 { 00:20:15.739 "name": null, 00:20:15.739 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:15.739 "is_configured": false, 00:20:15.739 "data_offset": 0, 00:20:15.739 "data_size": 65536 00:20:15.739 }, 00:20:15.739 { 00:20:15.739 "name": null, 00:20:15.739 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:15.739 "is_configured": false, 00:20:15.739 "data_offset": 0, 00:20:15.739 "data_size": 65536 00:20:15.739 }, 00:20:15.739 { 00:20:15.739 "name": "BaseBdev4", 00:20:15.739 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:15.739 "is_configured": true, 00:20:15.739 "data_offset": 0, 00:20:15.739 "data_size": 65536 00:20:15.739 } 00:20:15.739 ] 00:20:15.739 }' 00:20:15.739 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.739 07:17:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.329 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.329 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.330 [2024-11-20 07:17:13.547221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:16.330 
07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.330 "name": "Existed_Raid", 00:20:16.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.330 "strip_size_kb": 64, 00:20:16.330 "state": "configuring", 00:20:16.330 "raid_level": "raid5f", 00:20:16.330 "superblock": false, 00:20:16.330 "num_base_bdevs": 4, 00:20:16.330 "num_base_bdevs_discovered": 3, 00:20:16.330 "num_base_bdevs_operational": 4, 00:20:16.330 "base_bdevs_list": [ 00:20:16.330 { 00:20:16.330 "name": "BaseBdev1", 00:20:16.330 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:16.330 "is_configured": true, 00:20:16.330 "data_offset": 0, 00:20:16.330 "data_size": 65536 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "name": null, 00:20:16.330 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:16.330 "is_configured": 
false, 00:20:16.330 "data_offset": 0, 00:20:16.330 "data_size": 65536 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "name": "BaseBdev3", 00:20:16.330 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:16.330 "is_configured": true, 00:20:16.330 "data_offset": 0, 00:20:16.330 "data_size": 65536 00:20:16.330 }, 00:20:16.330 { 00:20:16.330 "name": "BaseBdev4", 00:20:16.330 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:16.330 "is_configured": true, 00:20:16.330 "data_offset": 0, 00:20:16.330 "data_size": 65536 00:20:16.330 } 00:20:16.330 ] 00:20:16.330 }' 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.330 07:17:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.895 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:16.895 [2024-11-20 07:17:14.163517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.153 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.154 "name": "Existed_Raid", 00:20:17.154 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:17.154 "strip_size_kb": 64, 00:20:17.154 "state": "configuring", 00:20:17.154 "raid_level": "raid5f", 00:20:17.154 "superblock": false, 00:20:17.154 "num_base_bdevs": 4, 00:20:17.154 "num_base_bdevs_discovered": 2, 00:20:17.154 "num_base_bdevs_operational": 4, 00:20:17.154 "base_bdevs_list": [ 00:20:17.154 { 00:20:17.154 "name": null, 00:20:17.154 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:17.154 "is_configured": false, 00:20:17.154 "data_offset": 0, 00:20:17.154 "data_size": 65536 00:20:17.154 }, 00:20:17.154 { 00:20:17.154 "name": null, 00:20:17.154 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:17.154 "is_configured": false, 00:20:17.154 "data_offset": 0, 00:20:17.154 "data_size": 65536 00:20:17.154 }, 00:20:17.154 { 00:20:17.154 "name": "BaseBdev3", 00:20:17.154 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:17.154 "is_configured": true, 00:20:17.154 "data_offset": 0, 00:20:17.154 "data_size": 65536 00:20:17.154 }, 00:20:17.154 { 00:20:17.154 "name": "BaseBdev4", 00:20:17.154 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:17.154 "is_configured": true, 00:20:17.154 "data_offset": 0, 00:20:17.154 "data_size": 65536 00:20:17.154 } 00:20:17.154 ] 00:20:17.154 }' 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.154 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.720 [2024-11-20 07:17:14.817962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.720 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.721 "name": "Existed_Raid", 00:20:17.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.721 "strip_size_kb": 64, 00:20:17.721 "state": "configuring", 00:20:17.721 "raid_level": "raid5f", 00:20:17.721 "superblock": false, 00:20:17.721 "num_base_bdevs": 4, 00:20:17.721 "num_base_bdevs_discovered": 3, 00:20:17.721 "num_base_bdevs_operational": 4, 00:20:17.721 "base_bdevs_list": [ 00:20:17.721 { 00:20:17.721 "name": null, 00:20:17.721 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:17.721 "is_configured": false, 00:20:17.721 "data_offset": 0, 00:20:17.721 "data_size": 65536 00:20:17.721 }, 00:20:17.721 { 00:20:17.721 "name": "BaseBdev2", 00:20:17.721 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:17.721 "is_configured": true, 00:20:17.721 "data_offset": 0, 00:20:17.721 "data_size": 65536 00:20:17.721 }, 00:20:17.721 { 00:20:17.721 "name": "BaseBdev3", 00:20:17.721 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:17.721 "is_configured": true, 00:20:17.721 "data_offset": 0, 00:20:17.721 "data_size": 65536 00:20:17.721 }, 00:20:17.721 { 00:20:17.721 "name": "BaseBdev4", 00:20:17.721 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:17.721 "is_configured": true, 00:20:17.721 "data_offset": 0, 00:20:17.721 "data_size": 65536 00:20:17.721 } 00:20:17.721 ] 00:20:17.721 }' 00:20:17.721 07:17:14 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.721 07:17:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:18.289 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3da620b9-4d04-4306-b7e4-42e7ac630897 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.290 [2024-11-20 07:17:15.537406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:18.290 [2024-11-20 
07:17:15.537476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:18.290 [2024-11-20 07:17:15.537490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:18.290 [2024-11-20 07:17:15.537864] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:18.290 [2024-11-20 07:17:15.544598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:18.290 [2024-11-20 07:17:15.544629] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:18.290 [2024-11-20 07:17:15.544952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.290 NewBaseBdev 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.290 [ 00:20:18.290 { 00:20:18.290 "name": "NewBaseBdev", 00:20:18.290 "aliases": [ 00:20:18.290 "3da620b9-4d04-4306-b7e4-42e7ac630897" 00:20:18.290 ], 00:20:18.290 "product_name": "Malloc disk", 00:20:18.290 "block_size": 512, 00:20:18.290 "num_blocks": 65536, 00:20:18.290 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:18.290 "assigned_rate_limits": { 00:20:18.290 "rw_ios_per_sec": 0, 00:20:18.290 "rw_mbytes_per_sec": 0, 00:20:18.290 "r_mbytes_per_sec": 0, 00:20:18.290 "w_mbytes_per_sec": 0 00:20:18.290 }, 00:20:18.290 "claimed": true, 00:20:18.290 "claim_type": "exclusive_write", 00:20:18.290 "zoned": false, 00:20:18.290 "supported_io_types": { 00:20:18.290 "read": true, 00:20:18.290 "write": true, 00:20:18.290 "unmap": true, 00:20:18.290 "flush": true, 00:20:18.290 "reset": true, 00:20:18.290 "nvme_admin": false, 00:20:18.290 "nvme_io": false, 00:20:18.290 "nvme_io_md": false, 00:20:18.290 "write_zeroes": true, 00:20:18.290 "zcopy": true, 00:20:18.290 "get_zone_info": false, 00:20:18.290 "zone_management": false, 00:20:18.290 "zone_append": false, 00:20:18.290 "compare": false, 00:20:18.290 "compare_and_write": false, 00:20:18.290 "abort": true, 00:20:18.290 "seek_hole": false, 00:20:18.290 "seek_data": false, 00:20:18.290 "copy": true, 00:20:18.290 "nvme_iov_md": false 00:20:18.290 }, 00:20:18.290 "memory_domains": [ 00:20:18.290 { 00:20:18.290 "dma_device_id": "system", 00:20:18.290 "dma_device_type": 1 00:20:18.290 }, 00:20:18.290 { 00:20:18.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:18.290 "dma_device_type": 2 00:20:18.290 } 
00:20:18.290 ], 00:20:18.290 "driver_specific": {} 00:20:18.290 } 00:20:18.290 ] 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.290 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.580 07:17:15 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.580 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.580 "name": "Existed_Raid", 00:20:18.580 "uuid": "03ff7315-7e81-4e36-9928-1f133102173c", 00:20:18.580 "strip_size_kb": 64, 00:20:18.580 "state": "online", 00:20:18.580 "raid_level": "raid5f", 00:20:18.580 "superblock": false, 00:20:18.580 "num_base_bdevs": 4, 00:20:18.580 "num_base_bdevs_discovered": 4, 00:20:18.580 "num_base_bdevs_operational": 4, 00:20:18.580 "base_bdevs_list": [ 00:20:18.580 { 00:20:18.580 "name": "NewBaseBdev", 00:20:18.580 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:18.580 "is_configured": true, 00:20:18.580 "data_offset": 0, 00:20:18.580 "data_size": 65536 00:20:18.580 }, 00:20:18.580 { 00:20:18.580 "name": "BaseBdev2", 00:20:18.580 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:18.580 "is_configured": true, 00:20:18.580 "data_offset": 0, 00:20:18.580 "data_size": 65536 00:20:18.580 }, 00:20:18.580 { 00:20:18.580 "name": "BaseBdev3", 00:20:18.580 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:18.580 "is_configured": true, 00:20:18.580 "data_offset": 0, 00:20:18.580 "data_size": 65536 00:20:18.580 }, 00:20:18.580 { 00:20:18.580 "name": "BaseBdev4", 00:20:18.580 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:18.580 "is_configured": true, 00:20:18.580 "data_offset": 0, 00:20:18.580 "data_size": 65536 00:20:18.580 } 00:20:18.580 ] 00:20:18.580 }' 00:20:18.580 07:17:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.580 07:17:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.853 [2024-11-20 07:17:16.140846] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:18.853 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.112 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:19.112 "name": "Existed_Raid", 00:20:19.112 "aliases": [ 00:20:19.112 "03ff7315-7e81-4e36-9928-1f133102173c" 00:20:19.112 ], 00:20:19.112 "product_name": "Raid Volume", 00:20:19.112 "block_size": 512, 00:20:19.112 "num_blocks": 196608, 00:20:19.112 "uuid": "03ff7315-7e81-4e36-9928-1f133102173c", 00:20:19.112 "assigned_rate_limits": { 00:20:19.112 "rw_ios_per_sec": 0, 00:20:19.112 "rw_mbytes_per_sec": 0, 00:20:19.112 "r_mbytes_per_sec": 0, 00:20:19.112 "w_mbytes_per_sec": 0 00:20:19.112 }, 00:20:19.112 "claimed": false, 00:20:19.112 "zoned": false, 00:20:19.112 "supported_io_types": { 00:20:19.112 "read": true, 00:20:19.112 "write": true, 00:20:19.112 "unmap": false, 00:20:19.113 "flush": false, 00:20:19.113 "reset": true, 00:20:19.113 "nvme_admin": false, 00:20:19.113 "nvme_io": false, 00:20:19.113 "nvme_io_md": 
false, 00:20:19.113 "write_zeroes": true, 00:20:19.113 "zcopy": false, 00:20:19.113 "get_zone_info": false, 00:20:19.113 "zone_management": false, 00:20:19.113 "zone_append": false, 00:20:19.113 "compare": false, 00:20:19.113 "compare_and_write": false, 00:20:19.113 "abort": false, 00:20:19.113 "seek_hole": false, 00:20:19.113 "seek_data": false, 00:20:19.113 "copy": false, 00:20:19.113 "nvme_iov_md": false 00:20:19.113 }, 00:20:19.113 "driver_specific": { 00:20:19.113 "raid": { 00:20:19.113 "uuid": "03ff7315-7e81-4e36-9928-1f133102173c", 00:20:19.113 "strip_size_kb": 64, 00:20:19.113 "state": "online", 00:20:19.113 "raid_level": "raid5f", 00:20:19.113 "superblock": false, 00:20:19.113 "num_base_bdevs": 4, 00:20:19.113 "num_base_bdevs_discovered": 4, 00:20:19.113 "num_base_bdevs_operational": 4, 00:20:19.113 "base_bdevs_list": [ 00:20:19.113 { 00:20:19.113 "name": "NewBaseBdev", 00:20:19.113 "uuid": "3da620b9-4d04-4306-b7e4-42e7ac630897", 00:20:19.113 "is_configured": true, 00:20:19.113 "data_offset": 0, 00:20:19.113 "data_size": 65536 00:20:19.113 }, 00:20:19.113 { 00:20:19.113 "name": "BaseBdev2", 00:20:19.113 "uuid": "a1961ab6-1ecd-4d76-a57b-55c31e967d0a", 00:20:19.113 "is_configured": true, 00:20:19.113 "data_offset": 0, 00:20:19.113 "data_size": 65536 00:20:19.113 }, 00:20:19.113 { 00:20:19.113 "name": "BaseBdev3", 00:20:19.113 "uuid": "5c6c72e4-865e-4ab1-a6f6-4b2905051b3a", 00:20:19.113 "is_configured": true, 00:20:19.113 "data_offset": 0, 00:20:19.113 "data_size": 65536 00:20:19.113 }, 00:20:19.113 { 00:20:19.113 "name": "BaseBdev4", 00:20:19.113 "uuid": "e066d6c5-16fe-4e67-98f8-0ee933f6bd22", 00:20:19.113 "is_configured": true, 00:20:19.113 "data_offset": 0, 00:20:19.113 "data_size": 65536 00:20:19.113 } 00:20:19.113 ] 00:20:19.113 } 00:20:19.113 } 00:20:19.113 }' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:19.113 07:17:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:19.113 BaseBdev2 00:20:19.113 BaseBdev3 00:20:19.113 BaseBdev4' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.113 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.373 07:17:16 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:19.373 [2024-11-20 07:17:16.504622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:19.373 [2024-11-20 07:17:16.504672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:19.373 [2024-11-20 07:17:16.504793] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.373 [2024-11-20 07:17:16.505356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.373 [2024-11-20 07:17:16.505515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83075 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83075 ']' 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83075 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83075 00:20:19.373 killing process with pid 83075 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83075' 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83075 00:20:19.373 [2024-11-20 07:17:16.541732] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.373 07:17:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83075 00:20:19.632 [2024-11-20 07:17:16.909765] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:21.007 ************************************ 00:20:21.007 END TEST raid5f_state_function_test 00:20:21.007 ************************************ 00:20:21.007 07:17:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:20:21.007 00:20:21.007 real 0m13.206s 00:20:21.007 user 0m21.920s 00:20:21.007 sys 0m1.886s 00:20:21.007 07:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.007 07:17:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.007 07:17:18 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:20:21.007 07:17:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:21.007 07:17:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.007 07:17:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:21.007 ************************************ 00:20:21.007 START TEST 
raid5f_state_function_test_sb 00:20:21.007 ************************************ 00:20:21.007 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:20:21.007 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:20:21.007 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:20:21.008 
07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:21.008 Process raid pid: 83758 00:20:21.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83758 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83758' 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83758 
00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83758 ']' 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:21.008 07:17:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.008 [2024-11-20 07:17:18.152603] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:20:21.008 [2024-11-20 07:17:18.153048] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.267 [2024-11-20 07:17:18.343972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.267 [2024-11-20 07:17:18.521512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.527 [2024-11-20 07:17:18.750345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.527 [2024-11-20 07:17:18.751708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:21.790 [2024-11-20 07:17:19.078285] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:21.790 [2024-11-20 07:17:19.078391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:21.790 [2024-11-20 07:17:19.078414] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:21.790 [2024-11-20 07:17:19.078435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:21.790 [2024-11-20 07:17:19.078447] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:20:21.790 [2024-11-20 07:17:19.078466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:21.790 [2024-11-20 07:17:19.078478] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:21.790 [2024-11-20 07:17:19.078496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.790 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.065 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.065 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.065 "name": "Existed_Raid", 00:20:22.065 "uuid": "ace85358-7872-4838-93e6-18ed2799d2c6", 00:20:22.066 "strip_size_kb": 64, 00:20:22.066 "state": "configuring", 00:20:22.066 "raid_level": "raid5f", 00:20:22.066 "superblock": true, 00:20:22.066 "num_base_bdevs": 4, 00:20:22.066 "num_base_bdevs_discovered": 0, 00:20:22.066 "num_base_bdevs_operational": 4, 00:20:22.066 "base_bdevs_list": [ 00:20:22.066 { 00:20:22.066 "name": "BaseBdev1", 00:20:22.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.066 "is_configured": false, 00:20:22.066 "data_offset": 0, 00:20:22.066 "data_size": 0 00:20:22.066 }, 00:20:22.066 { 00:20:22.066 "name": "BaseBdev2", 00:20:22.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.066 "is_configured": false, 00:20:22.066 "data_offset": 0, 00:20:22.066 "data_size": 0 00:20:22.066 }, 00:20:22.066 { 00:20:22.066 "name": "BaseBdev3", 00:20:22.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.066 "is_configured": false, 00:20:22.066 "data_offset": 0, 00:20:22.066 "data_size": 0 00:20:22.066 }, 00:20:22.066 { 00:20:22.066 "name": "BaseBdev4", 00:20:22.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.066 "is_configured": false, 00:20:22.066 "data_offset": 0, 00:20:22.066 "data_size": 0 00:20:22.066 } 00:20:22.066 ] 00:20:22.066 }' 00:20:22.066 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.067 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
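The `verify_raid_bdev_state` helper exercised above fetches the array info with `rpc_cmd bdev_raid_get_bdevs all` and filters it through `jq -r '.[] | select(.name == "Existed_Raid")'` before comparing fields. The same check can be sketched in Python; this is an illustrative stand-in (the function name and trimmed JSON shape below are taken from the log, not from SPDK's own test code):

```python
import json

# JSON shape as reported by `bdev_raid_get_bdevs all`, trimmed to the
# fields the state check inspects; values copied from the log above.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4
  }
]
""")

def verify_raid_bdev_state(bdevs, name, expected_state, raid_level,
                           strip_size_kb, num_operational):
    # Python equivalent of the jq filter
    # '.[] | select(.name == "Existed_Raid")' plus the field comparisons.
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    return info

info = verify_raid_bdev_state(raid_bdevs, "Existed_Raid",
                              "configuring", "raid5f", 64, 4)
# At this point in the run no base bdevs exist, so none are discovered.
assert info["num_base_bdevs_discovered"] == 0
```

As in the log, the array created from four nonexistent base bdevs sits in `configuring` with zero discovered members until malloc bdevs are registered under those names.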
00:20:22.348 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.349 [2024-11-20 07:17:19.582354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.349 [2024-11-20 07:17:19.582404] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.349 [2024-11-20 07:17:19.594322] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:22.349 [2024-11-20 07:17:19.594525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:22.349 [2024-11-20 07:17:19.594657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.349 [2024-11-20 07:17:19.594722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.349 [2024-11-20 07:17:19.594961] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:22.349 [2024-11-20 07:17:19.595028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:22.349 [2024-11-20 07:17:19.595199] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:22.349 [2024-11-20 07:17:19.595338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.349 [2024-11-20 07:17:19.644376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.349 BaseBdev1 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.349 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.620 [ 00:20:22.620 { 00:20:22.620 "name": "BaseBdev1", 00:20:22.620 "aliases": [ 00:20:22.620 "26fde6a6-a856-47ff-9bb7-3a23eba60085" 00:20:22.620 ], 00:20:22.620 "product_name": "Malloc disk", 00:20:22.620 "block_size": 512, 00:20:22.620 "num_blocks": 65536, 00:20:22.620 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:22.620 "assigned_rate_limits": { 00:20:22.620 "rw_ios_per_sec": 0, 00:20:22.620 "rw_mbytes_per_sec": 0, 00:20:22.620 "r_mbytes_per_sec": 0, 00:20:22.620 "w_mbytes_per_sec": 0 00:20:22.620 }, 00:20:22.620 "claimed": true, 00:20:22.620 "claim_type": "exclusive_write", 00:20:22.620 "zoned": false, 00:20:22.620 "supported_io_types": { 00:20:22.620 "read": true, 00:20:22.620 "write": true, 00:20:22.620 "unmap": true, 00:20:22.620 "flush": true, 00:20:22.620 "reset": true, 00:20:22.620 "nvme_admin": false, 00:20:22.620 "nvme_io": false, 00:20:22.620 "nvme_io_md": false, 00:20:22.620 "write_zeroes": true, 00:20:22.620 "zcopy": true, 00:20:22.620 "get_zone_info": false, 00:20:22.620 "zone_management": false, 00:20:22.620 "zone_append": false, 00:20:22.620 "compare": false, 00:20:22.620 "compare_and_write": false, 00:20:22.620 "abort": true, 00:20:22.620 "seek_hole": false, 00:20:22.620 "seek_data": false, 00:20:22.620 "copy": true, 00:20:22.620 "nvme_iov_md": false 00:20:22.620 }, 00:20:22.620 "memory_domains": [ 00:20:22.620 { 00:20:22.620 "dma_device_id": "system", 00:20:22.620 "dma_device_type": 1 00:20:22.620 }, 00:20:22.620 { 00:20:22.620 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:22.620 "dma_device_type": 2 00:20:22.620 } 00:20:22.620 ], 00:20:22.620 "driver_specific": {} 00:20:22.620 } 00:20:22.620 ] 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.620 07:17:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:22.620 "name": "Existed_Raid", 00:20:22.620 "uuid": "5cc30f9b-7d42-4847-a11f-1666db583545", 00:20:22.620 "strip_size_kb": 64, 00:20:22.620 "state": "configuring", 00:20:22.620 "raid_level": "raid5f", 00:20:22.620 "superblock": true, 00:20:22.620 "num_base_bdevs": 4, 00:20:22.620 "num_base_bdevs_discovered": 1, 00:20:22.620 "num_base_bdevs_operational": 4, 00:20:22.620 "base_bdevs_list": [ 00:20:22.620 { 00:20:22.620 "name": "BaseBdev1", 00:20:22.620 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:22.620 "is_configured": true, 00:20:22.620 "data_offset": 2048, 00:20:22.620 "data_size": 63488 00:20:22.620 }, 00:20:22.620 { 00:20:22.620 "name": "BaseBdev2", 00:20:22.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.620 "is_configured": false, 00:20:22.620 "data_offset": 0, 00:20:22.620 "data_size": 0 00:20:22.620 }, 00:20:22.620 { 00:20:22.620 "name": "BaseBdev3", 00:20:22.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.620 "is_configured": false, 00:20:22.620 "data_offset": 0, 00:20:22.620 "data_size": 0 00:20:22.620 }, 00:20:22.620 { 00:20:22.620 "name": "BaseBdev4", 00:20:22.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:22.620 "is_configured": false, 00:20:22.620 "data_offset": 0, 00:20:22.620 "data_size": 0 00:20:22.620 } 00:20:22.620 ] 00:20:22.620 }' 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:22.620 07:17:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:22.894 07:17:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.894 [2024-11-20 07:17:20.192564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.894 [2024-11-20 07:17:20.192632] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.894 [2024-11-20 07:17:20.200630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:22.894 [2024-11-20 07:17:20.203120] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:22.894 [2024-11-20 07:17:20.203177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:22.894 [2024-11-20 07:17:20.203194] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:22.894 [2024-11-20 07:17:20.203212] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:22.894 [2024-11-20 07:17:20.203222] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:20:22.894 [2024-11-20 07:17:20.203236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:22.894 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.157 07:17:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.157 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.157 "name": "Existed_Raid", 00:20:23.157 "uuid": "3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:23.157 "strip_size_kb": 64, 00:20:23.157 "state": "configuring", 00:20:23.157 "raid_level": "raid5f", 00:20:23.157 "superblock": true, 00:20:23.157 "num_base_bdevs": 4, 00:20:23.157 "num_base_bdevs_discovered": 1, 00:20:23.157 "num_base_bdevs_operational": 4, 00:20:23.157 "base_bdevs_list": [ 00:20:23.157 { 00:20:23.157 "name": "BaseBdev1", 00:20:23.157 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:23.157 "is_configured": true, 00:20:23.157 "data_offset": 2048, 00:20:23.157 "data_size": 63488 00:20:23.157 }, 00:20:23.157 { 00:20:23.157 "name": "BaseBdev2", 00:20:23.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.157 "is_configured": false, 00:20:23.157 "data_offset": 0, 00:20:23.157 "data_size": 0 00:20:23.157 }, 00:20:23.157 { 00:20:23.157 "name": "BaseBdev3", 00:20:23.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.157 "is_configured": false, 00:20:23.157 "data_offset": 0, 00:20:23.157 "data_size": 0 00:20:23.157 }, 00:20:23.157 { 00:20:23.157 "name": "BaseBdev4", 00:20:23.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.157 "is_configured": false, 00:20:23.157 "data_offset": 0, 00:20:23.157 "data_size": 0 00:20:23.157 } 00:20:23.157 ] 00:20:23.157 }' 00:20:23.157 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.157 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.415 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:23.415 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:23.415 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.675 [2024-11-20 07:17:20.755935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:23.675 BaseBdev2 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.675 [ 00:20:23.675 { 00:20:23.675 "name": "BaseBdev2", 00:20:23.675 "aliases": [ 00:20:23.675 
"f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30" 00:20:23.675 ], 00:20:23.675 "product_name": "Malloc disk", 00:20:23.675 "block_size": 512, 00:20:23.675 "num_blocks": 65536, 00:20:23.675 "uuid": "f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30", 00:20:23.675 "assigned_rate_limits": { 00:20:23.675 "rw_ios_per_sec": 0, 00:20:23.675 "rw_mbytes_per_sec": 0, 00:20:23.675 "r_mbytes_per_sec": 0, 00:20:23.675 "w_mbytes_per_sec": 0 00:20:23.675 }, 00:20:23.675 "claimed": true, 00:20:23.675 "claim_type": "exclusive_write", 00:20:23.675 "zoned": false, 00:20:23.675 "supported_io_types": { 00:20:23.675 "read": true, 00:20:23.675 "write": true, 00:20:23.675 "unmap": true, 00:20:23.675 "flush": true, 00:20:23.675 "reset": true, 00:20:23.675 "nvme_admin": false, 00:20:23.675 "nvme_io": false, 00:20:23.675 "nvme_io_md": false, 00:20:23.675 "write_zeroes": true, 00:20:23.675 "zcopy": true, 00:20:23.675 "get_zone_info": false, 00:20:23.675 "zone_management": false, 00:20:23.675 "zone_append": false, 00:20:23.675 "compare": false, 00:20:23.675 "compare_and_write": false, 00:20:23.675 "abort": true, 00:20:23.675 "seek_hole": false, 00:20:23.675 "seek_data": false, 00:20:23.675 "copy": true, 00:20:23.675 "nvme_iov_md": false 00:20:23.675 }, 00:20:23.675 "memory_domains": [ 00:20:23.675 { 00:20:23.675 "dma_device_id": "system", 00:20:23.675 "dma_device_type": 1 00:20:23.675 }, 00:20:23.675 { 00:20:23.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.675 "dma_device_type": 2 00:20:23.675 } 00:20:23.675 ], 00:20:23.675 "driver_specific": {} 00:20:23.675 } 00:20:23.675 ] 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.675 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.675 "name": "Existed_Raid", 00:20:23.675 "uuid": 
"3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:23.675 "strip_size_kb": 64, 00:20:23.675 "state": "configuring", 00:20:23.675 "raid_level": "raid5f", 00:20:23.675 "superblock": true, 00:20:23.675 "num_base_bdevs": 4, 00:20:23.675 "num_base_bdevs_discovered": 2, 00:20:23.675 "num_base_bdevs_operational": 4, 00:20:23.675 "base_bdevs_list": [ 00:20:23.675 { 00:20:23.675 "name": "BaseBdev1", 00:20:23.675 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:23.675 "is_configured": true, 00:20:23.675 "data_offset": 2048, 00:20:23.675 "data_size": 63488 00:20:23.675 }, 00:20:23.675 { 00:20:23.675 "name": "BaseBdev2", 00:20:23.675 "uuid": "f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30", 00:20:23.675 "is_configured": true, 00:20:23.675 "data_offset": 2048, 00:20:23.675 "data_size": 63488 00:20:23.675 }, 00:20:23.675 { 00:20:23.675 "name": "BaseBdev3", 00:20:23.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.675 "is_configured": false, 00:20:23.675 "data_offset": 0, 00:20:23.675 "data_size": 0 00:20:23.676 }, 00:20:23.676 { 00:20:23.676 "name": "BaseBdev4", 00:20:23.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.676 "is_configured": false, 00:20:23.676 "data_offset": 0, 00:20:23.676 "data_size": 0 00:20:23.676 } 00:20:23.676 ] 00:20:23.676 }' 00:20:23.676 07:17:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.676 07:17:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.242 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.243 [2024-11-20 07:17:21.403636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:24.243 BaseBdev3 
00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.243 [ 00:20:24.243 { 00:20:24.243 "name": "BaseBdev3", 00:20:24.243 "aliases": [ 00:20:24.243 "286a8ff7-52d6-4079-ae30-81939722abbd" 00:20:24.243 ], 00:20:24.243 "product_name": "Malloc disk", 00:20:24.243 "block_size": 512, 00:20:24.243 "num_blocks": 65536, 00:20:24.243 "uuid": "286a8ff7-52d6-4079-ae30-81939722abbd", 00:20:24.243 
"assigned_rate_limits": { 00:20:24.243 "rw_ios_per_sec": 0, 00:20:24.243 "rw_mbytes_per_sec": 0, 00:20:24.243 "r_mbytes_per_sec": 0, 00:20:24.243 "w_mbytes_per_sec": 0 00:20:24.243 }, 00:20:24.243 "claimed": true, 00:20:24.243 "claim_type": "exclusive_write", 00:20:24.243 "zoned": false, 00:20:24.243 "supported_io_types": { 00:20:24.243 "read": true, 00:20:24.243 "write": true, 00:20:24.243 "unmap": true, 00:20:24.243 "flush": true, 00:20:24.243 "reset": true, 00:20:24.243 "nvme_admin": false, 00:20:24.243 "nvme_io": false, 00:20:24.243 "nvme_io_md": false, 00:20:24.243 "write_zeroes": true, 00:20:24.243 "zcopy": true, 00:20:24.243 "get_zone_info": false, 00:20:24.243 "zone_management": false, 00:20:24.243 "zone_append": false, 00:20:24.243 "compare": false, 00:20:24.243 "compare_and_write": false, 00:20:24.243 "abort": true, 00:20:24.243 "seek_hole": false, 00:20:24.243 "seek_data": false, 00:20:24.243 "copy": true, 00:20:24.243 "nvme_iov_md": false 00:20:24.243 }, 00:20:24.243 "memory_domains": [ 00:20:24.243 { 00:20:24.243 "dma_device_id": "system", 00:20:24.243 "dma_device_type": 1 00:20:24.243 }, 00:20:24.243 { 00:20:24.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.243 "dma_device_type": 2 00:20:24.243 } 00:20:24.243 ], 00:20:24.243 "driver_specific": {} 00:20:24.243 } 00:20:24.243 ] 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.243 "name": "Existed_Raid", 00:20:24.243 "uuid": "3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:24.243 "strip_size_kb": 64, 00:20:24.243 "state": "configuring", 00:20:24.243 "raid_level": "raid5f", 00:20:24.243 "superblock": true, 00:20:24.243 "num_base_bdevs": 4, 00:20:24.243 "num_base_bdevs_discovered": 3, 
00:20:24.243 "num_base_bdevs_operational": 4, 00:20:24.243 "base_bdevs_list": [ 00:20:24.243 { 00:20:24.243 "name": "BaseBdev1", 00:20:24.243 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:24.243 "is_configured": true, 00:20:24.243 "data_offset": 2048, 00:20:24.243 "data_size": 63488 00:20:24.243 }, 00:20:24.243 { 00:20:24.243 "name": "BaseBdev2", 00:20:24.243 "uuid": "f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30", 00:20:24.243 "is_configured": true, 00:20:24.243 "data_offset": 2048, 00:20:24.243 "data_size": 63488 00:20:24.243 }, 00:20:24.243 { 00:20:24.243 "name": "BaseBdev3", 00:20:24.243 "uuid": "286a8ff7-52d6-4079-ae30-81939722abbd", 00:20:24.243 "is_configured": true, 00:20:24.243 "data_offset": 2048, 00:20:24.243 "data_size": 63488 00:20:24.243 }, 00:20:24.243 { 00:20:24.243 "name": "BaseBdev4", 00:20:24.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:24.243 "is_configured": false, 00:20:24.243 "data_offset": 0, 00:20:24.243 "data_size": 0 00:20:24.243 } 00:20:24.243 ] 00:20:24.243 }' 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.243 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 07:17:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:24.812 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.812 07:17:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 [2024-11-20 07:17:22.015210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:24.812 [2024-11-20 07:17:22.015810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:24.812 [2024-11-20 07:17:22.015837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:24.812 [2024-11-20 
07:17:22.016231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:24.812 BaseBdev4 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 [2024-11-20 07:17:22.023144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:24.812 [2024-11-20 07:17:22.023177] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:24.812 [2024-11-20 07:17:22.023492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:24.812 07:17:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 [ 00:20:24.812 { 00:20:24.812 "name": "BaseBdev4", 00:20:24.812 "aliases": [ 00:20:24.812 "0f13863a-7e03-43df-a306-316e4c7a58c5" 00:20:24.812 ], 00:20:24.812 "product_name": "Malloc disk", 00:20:24.812 "block_size": 512, 00:20:24.812 "num_blocks": 65536, 00:20:24.812 "uuid": "0f13863a-7e03-43df-a306-316e4c7a58c5", 00:20:24.812 "assigned_rate_limits": { 00:20:24.812 "rw_ios_per_sec": 0, 00:20:24.812 "rw_mbytes_per_sec": 0, 00:20:24.812 "r_mbytes_per_sec": 0, 00:20:24.812 "w_mbytes_per_sec": 0 00:20:24.812 }, 00:20:24.812 "claimed": true, 00:20:24.812 "claim_type": "exclusive_write", 00:20:24.812 "zoned": false, 00:20:24.812 "supported_io_types": { 00:20:24.812 "read": true, 00:20:24.812 "write": true, 00:20:24.812 "unmap": true, 00:20:24.812 "flush": true, 00:20:24.812 "reset": true, 00:20:24.812 "nvme_admin": false, 00:20:24.812 "nvme_io": false, 00:20:24.812 "nvme_io_md": false, 00:20:24.812 "write_zeroes": true, 00:20:24.812 "zcopy": true, 00:20:24.812 "get_zone_info": false, 00:20:24.812 "zone_management": false, 00:20:24.812 "zone_append": false, 00:20:24.812 "compare": false, 00:20:24.812 "compare_and_write": false, 00:20:24.812 "abort": true, 00:20:24.812 "seek_hole": false, 00:20:24.812 "seek_data": false, 00:20:24.812 "copy": true, 00:20:24.812 "nvme_iov_md": false 00:20:24.812 }, 00:20:24.812 "memory_domains": [ 00:20:24.812 { 00:20:24.812 "dma_device_id": "system", 00:20:24.812 "dma_device_type": 1 00:20:24.812 }, 00:20:24.812 { 00:20:24.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:24.812 "dma_device_type": 2 00:20:24.812 } 00:20:24.812 ], 00:20:24.812 "driver_specific": {} 00:20:24.812 } 00:20:24.812 ] 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.812 07:17:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:24.812 "name": "Existed_Raid", 00:20:24.812 "uuid": "3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:24.812 "strip_size_kb": 64, 00:20:24.812 "state": "online", 00:20:24.812 "raid_level": "raid5f", 00:20:24.812 "superblock": true, 00:20:24.812 "num_base_bdevs": 4, 00:20:24.812 "num_base_bdevs_discovered": 4, 00:20:24.812 "num_base_bdevs_operational": 4, 00:20:24.812 "base_bdevs_list": [ 00:20:24.812 { 00:20:24.812 "name": "BaseBdev1", 00:20:24.812 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:24.812 "is_configured": true, 00:20:24.812 "data_offset": 2048, 00:20:24.812 "data_size": 63488 00:20:24.812 }, 00:20:24.812 { 00:20:24.812 "name": "BaseBdev2", 00:20:24.812 "uuid": "f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30", 00:20:24.812 "is_configured": true, 00:20:24.812 "data_offset": 2048, 00:20:24.812 "data_size": 63488 00:20:24.812 }, 00:20:24.812 { 00:20:24.812 "name": "BaseBdev3", 00:20:24.812 "uuid": "286a8ff7-52d6-4079-ae30-81939722abbd", 00:20:24.812 "is_configured": true, 00:20:24.812 "data_offset": 2048, 00:20:24.812 "data_size": 63488 00:20:24.812 }, 00:20:24.812 { 00:20:24.812 "name": "BaseBdev4", 00:20:24.812 "uuid": "0f13863a-7e03-43df-a306-316e4c7a58c5", 00:20:24.812 "is_configured": true, 00:20:24.812 "data_offset": 2048, 00:20:24.812 "data_size": 63488 00:20:24.812 } 00:20:24.812 ] 00:20:24.812 }' 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:24.812 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:25.380 [2024-11-20 07:17:22.603285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.380 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:25.380 "name": "Existed_Raid", 00:20:25.380 "aliases": [ 00:20:25.380 "3bfa3c59-5878-431c-b398-3afa68f61f96" 00:20:25.380 ], 00:20:25.380 "product_name": "Raid Volume", 00:20:25.380 "block_size": 512, 00:20:25.380 "num_blocks": 190464, 00:20:25.380 "uuid": "3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:25.380 "assigned_rate_limits": { 00:20:25.380 "rw_ios_per_sec": 0, 00:20:25.380 "rw_mbytes_per_sec": 0, 00:20:25.380 "r_mbytes_per_sec": 0, 00:20:25.380 "w_mbytes_per_sec": 0 00:20:25.380 }, 00:20:25.380 "claimed": false, 00:20:25.380 "zoned": false, 00:20:25.380 "supported_io_types": { 00:20:25.380 "read": true, 00:20:25.380 "write": true, 00:20:25.380 "unmap": false, 00:20:25.380 "flush": false, 
00:20:25.380 "reset": true, 00:20:25.380 "nvme_admin": false, 00:20:25.380 "nvme_io": false, 00:20:25.380 "nvme_io_md": false, 00:20:25.380 "write_zeroes": true, 00:20:25.380 "zcopy": false, 00:20:25.380 "get_zone_info": false, 00:20:25.380 "zone_management": false, 00:20:25.380 "zone_append": false, 00:20:25.380 "compare": false, 00:20:25.380 "compare_and_write": false, 00:20:25.380 "abort": false, 00:20:25.380 "seek_hole": false, 00:20:25.380 "seek_data": false, 00:20:25.380 "copy": false, 00:20:25.380 "nvme_iov_md": false 00:20:25.380 }, 00:20:25.380 "driver_specific": { 00:20:25.380 "raid": { 00:20:25.380 "uuid": "3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:25.380 "strip_size_kb": 64, 00:20:25.380 "state": "online", 00:20:25.380 "raid_level": "raid5f", 00:20:25.380 "superblock": true, 00:20:25.380 "num_base_bdevs": 4, 00:20:25.380 "num_base_bdevs_discovered": 4, 00:20:25.380 "num_base_bdevs_operational": 4, 00:20:25.380 "base_bdevs_list": [ 00:20:25.380 { 00:20:25.380 "name": "BaseBdev1", 00:20:25.380 "uuid": "26fde6a6-a856-47ff-9bb7-3a23eba60085", 00:20:25.380 "is_configured": true, 00:20:25.380 "data_offset": 2048, 00:20:25.380 "data_size": 63488 00:20:25.380 }, 00:20:25.380 { 00:20:25.380 "name": "BaseBdev2", 00:20:25.380 "uuid": "f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30", 00:20:25.380 "is_configured": true, 00:20:25.380 "data_offset": 2048, 00:20:25.380 "data_size": 63488 00:20:25.380 }, 00:20:25.380 { 00:20:25.380 "name": "BaseBdev3", 00:20:25.380 "uuid": "286a8ff7-52d6-4079-ae30-81939722abbd", 00:20:25.380 "is_configured": true, 00:20:25.380 "data_offset": 2048, 00:20:25.380 "data_size": 63488 00:20:25.381 }, 00:20:25.381 { 00:20:25.381 "name": "BaseBdev4", 00:20:25.381 "uuid": "0f13863a-7e03-43df-a306-316e4c7a58c5", 00:20:25.381 "is_configured": true, 00:20:25.381 "data_offset": 2048, 00:20:25.381 "data_size": 63488 00:20:25.381 } 00:20:25.381 ] 00:20:25.381 } 00:20:25.381 } 00:20:25.381 }' 00:20:25.381 07:17:22 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:25.639 BaseBdev2 00:20:25.639 BaseBdev3 00:20:25.639 BaseBdev4' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.639 07:17:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.639 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.640 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:25.640 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:25.640 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:25.640 07:17:22 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.640 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.640 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.899 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:25.899 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:25.899 07:17:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:25.899 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.899 07:17:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.899 [2024-11-20 07:17:22.975196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:25.899 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.900 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.900 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.900 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.900 "name": "Existed_Raid", 00:20:25.900 "uuid": "3bfa3c59-5878-431c-b398-3afa68f61f96", 00:20:25.900 "strip_size_kb": 64, 00:20:25.900 "state": "online", 00:20:25.900 "raid_level": "raid5f", 00:20:25.900 "superblock": true, 00:20:25.900 "num_base_bdevs": 4, 00:20:25.900 "num_base_bdevs_discovered": 3, 00:20:25.900 "num_base_bdevs_operational": 3, 00:20:25.900 "base_bdevs_list": [ 00:20:25.900 { 00:20:25.900 "name": 
null, 00:20:25.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.900 "is_configured": false, 00:20:25.900 "data_offset": 0, 00:20:25.900 "data_size": 63488 00:20:25.900 }, 00:20:25.900 { 00:20:25.900 "name": "BaseBdev2", 00:20:25.900 "uuid": "f5e87e7a-09e6-4e1d-a50d-bab2f86dcc30", 00:20:25.900 "is_configured": true, 00:20:25.900 "data_offset": 2048, 00:20:25.900 "data_size": 63488 00:20:25.900 }, 00:20:25.900 { 00:20:25.900 "name": "BaseBdev3", 00:20:25.900 "uuid": "286a8ff7-52d6-4079-ae30-81939722abbd", 00:20:25.900 "is_configured": true, 00:20:25.900 "data_offset": 2048, 00:20:25.900 "data_size": 63488 00:20:25.900 }, 00:20:25.900 { 00:20:25.900 "name": "BaseBdev4", 00:20:25.900 "uuid": "0f13863a-7e03-43df-a306-316e4c7a58c5", 00:20:25.900 "is_configured": true, 00:20:25.900 "data_offset": 2048, 00:20:25.900 "data_size": 63488 00:20:25.900 } 00:20:25.900 ] 00:20:25.900 }' 00:20:25.900 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.900 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 [2024-11-20 07:17:23.614055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:26.529 [2024-11-20 07:17:23.614269] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:26.529 [2024-11-20 07:17:23.704183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.529 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.529 [2024-11-20 07:17:23.764219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.789 [2024-11-20 
07:17:23.906624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:20:26.789 [2024-11-20 07:17:23.906690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.789 07:17:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.789 07:17:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:26.789 BaseBdev2 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.789 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.790 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.049 [ 00:20:27.049 { 00:20:27.049 "name": "BaseBdev2", 00:20:27.049 "aliases": [ 00:20:27.049 "9d14f037-3353-4a95-9b8b-a9c1fc795886" 00:20:27.049 ], 00:20:27.049 "product_name": "Malloc disk", 00:20:27.049 "block_size": 512, 00:20:27.049 
"num_blocks": 65536, 00:20:27.049 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:27.049 "assigned_rate_limits": { 00:20:27.049 "rw_ios_per_sec": 0, 00:20:27.049 "rw_mbytes_per_sec": 0, 00:20:27.049 "r_mbytes_per_sec": 0, 00:20:27.049 "w_mbytes_per_sec": 0 00:20:27.049 }, 00:20:27.049 "claimed": false, 00:20:27.049 "zoned": false, 00:20:27.049 "supported_io_types": { 00:20:27.049 "read": true, 00:20:27.049 "write": true, 00:20:27.049 "unmap": true, 00:20:27.049 "flush": true, 00:20:27.049 "reset": true, 00:20:27.049 "nvme_admin": false, 00:20:27.049 "nvme_io": false, 00:20:27.049 "nvme_io_md": false, 00:20:27.049 "write_zeroes": true, 00:20:27.049 "zcopy": true, 00:20:27.049 "get_zone_info": false, 00:20:27.049 "zone_management": false, 00:20:27.049 "zone_append": false, 00:20:27.049 "compare": false, 00:20:27.049 "compare_and_write": false, 00:20:27.049 "abort": true, 00:20:27.049 "seek_hole": false, 00:20:27.049 "seek_data": false, 00:20:27.049 "copy": true, 00:20:27.049 "nvme_iov_md": false 00:20:27.049 }, 00:20:27.049 "memory_domains": [ 00:20:27.049 { 00:20:27.049 "dma_device_id": "system", 00:20:27.049 "dma_device_type": 1 00:20:27.049 }, 00:20:27.049 { 00:20:27.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.049 "dma_device_type": 2 00:20:27.049 } 00:20:27.049 ], 00:20:27.049 "driver_specific": {} 00:20:27.049 } 00:20:27.049 ] 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:20:27.049 07:17:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.049 BaseBdev3 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.049 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.049 [ 00:20:27.049 { 00:20:27.049 "name": "BaseBdev3", 00:20:27.049 "aliases": [ 00:20:27.049 
"61a461a3-8caf-41d9-a192-6233fc83a4d4" 00:20:27.049 ], 00:20:27.049 "product_name": "Malloc disk", 00:20:27.049 "block_size": 512, 00:20:27.049 "num_blocks": 65536, 00:20:27.049 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:27.049 "assigned_rate_limits": { 00:20:27.049 "rw_ios_per_sec": 0, 00:20:27.049 "rw_mbytes_per_sec": 0, 00:20:27.049 "r_mbytes_per_sec": 0, 00:20:27.049 "w_mbytes_per_sec": 0 00:20:27.049 }, 00:20:27.049 "claimed": false, 00:20:27.049 "zoned": false, 00:20:27.049 "supported_io_types": { 00:20:27.049 "read": true, 00:20:27.049 "write": true, 00:20:27.049 "unmap": true, 00:20:27.049 "flush": true, 00:20:27.049 "reset": true, 00:20:27.049 "nvme_admin": false, 00:20:27.049 "nvme_io": false, 00:20:27.049 "nvme_io_md": false, 00:20:27.049 "write_zeroes": true, 00:20:27.049 "zcopy": true, 00:20:27.049 "get_zone_info": false, 00:20:27.049 "zone_management": false, 00:20:27.050 "zone_append": false, 00:20:27.050 "compare": false, 00:20:27.050 "compare_and_write": false, 00:20:27.050 "abort": true, 00:20:27.050 "seek_hole": false, 00:20:27.050 "seek_data": false, 00:20:27.050 "copy": true, 00:20:27.050 "nvme_iov_md": false 00:20:27.050 }, 00:20:27.050 "memory_domains": [ 00:20:27.050 { 00:20:27.050 "dma_device_id": "system", 00:20:27.050 "dma_device_type": 1 00:20:27.050 }, 00:20:27.050 { 00:20:27.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.050 "dma_device_type": 2 00:20:27.050 } 00:20:27.050 ], 00:20:27.050 "driver_specific": {} 00:20:27.050 } 00:20:27.050 ] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:27.050 07:17:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.050 BaseBdev4 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:27.050 [ 00:20:27.050 { 00:20:27.050 "name": "BaseBdev4", 00:20:27.050 "aliases": [ 00:20:27.050 "1831b87e-ccf8-432a-9597-b90ac7394a07" 00:20:27.050 ], 00:20:27.050 "product_name": "Malloc disk", 00:20:27.050 "block_size": 512, 00:20:27.050 "num_blocks": 65536, 00:20:27.050 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:27.050 "assigned_rate_limits": { 00:20:27.050 "rw_ios_per_sec": 0, 00:20:27.050 "rw_mbytes_per_sec": 0, 00:20:27.050 "r_mbytes_per_sec": 0, 00:20:27.050 "w_mbytes_per_sec": 0 00:20:27.050 }, 00:20:27.050 "claimed": false, 00:20:27.050 "zoned": false, 00:20:27.050 "supported_io_types": { 00:20:27.050 "read": true, 00:20:27.050 "write": true, 00:20:27.050 "unmap": true, 00:20:27.050 "flush": true, 00:20:27.050 "reset": true, 00:20:27.050 "nvme_admin": false, 00:20:27.050 "nvme_io": false, 00:20:27.050 "nvme_io_md": false, 00:20:27.050 "write_zeroes": true, 00:20:27.050 "zcopy": true, 00:20:27.050 "get_zone_info": false, 00:20:27.050 "zone_management": false, 00:20:27.050 "zone_append": false, 00:20:27.050 "compare": false, 00:20:27.050 "compare_and_write": false, 00:20:27.050 "abort": true, 00:20:27.050 "seek_hole": false, 00:20:27.050 "seek_data": false, 00:20:27.050 "copy": true, 00:20:27.050 "nvme_iov_md": false 00:20:27.050 }, 00:20:27.050 "memory_domains": [ 00:20:27.050 { 00:20:27.050 "dma_device_id": "system", 00:20:27.050 "dma_device_type": 1 00:20:27.050 }, 00:20:27.050 { 00:20:27.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.050 "dma_device_type": 2 00:20:27.050 } 00:20:27.050 ], 00:20:27.050 "driver_specific": {} 00:20:27.050 } 00:20:27.050 ] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:20:27.050 07:17:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.050 [2024-11-20 07:17:24.288573] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.050 [2024-11-20 07:17:24.288770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.050 [2024-11-20 07:17:24.288936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:27.050 [2024-11-20 07:17:24.291449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:27.050 [2024-11-20 07:17:24.291658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.050 "name": "Existed_Raid", 00:20:27.050 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:27.050 "strip_size_kb": 64, 00:20:27.050 "state": "configuring", 00:20:27.050 "raid_level": "raid5f", 00:20:27.050 "superblock": true, 00:20:27.050 "num_base_bdevs": 4, 00:20:27.050 "num_base_bdevs_discovered": 3, 00:20:27.050 "num_base_bdevs_operational": 4, 00:20:27.050 "base_bdevs_list": [ 00:20:27.050 { 00:20:27.050 "name": "BaseBdev1", 00:20:27.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.050 "is_configured": false, 00:20:27.050 "data_offset": 0, 00:20:27.050 "data_size": 0 00:20:27.050 }, 00:20:27.050 { 00:20:27.050 "name": "BaseBdev2", 00:20:27.050 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:27.050 "is_configured": true, 00:20:27.050 "data_offset": 2048, 00:20:27.050 
"data_size": 63488 00:20:27.050 }, 00:20:27.050 { 00:20:27.050 "name": "BaseBdev3", 00:20:27.050 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:27.050 "is_configured": true, 00:20:27.050 "data_offset": 2048, 00:20:27.050 "data_size": 63488 00:20:27.050 }, 00:20:27.050 { 00:20:27.050 "name": "BaseBdev4", 00:20:27.050 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:27.050 "is_configured": true, 00:20:27.050 "data_offset": 2048, 00:20:27.050 "data_size": 63488 00:20:27.050 } 00:20:27.050 ] 00:20:27.050 }' 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.050 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.617 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.618 [2024-11-20 07:17:24.808692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:27.618 07:17:24 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:27.618 "name": "Existed_Raid", 00:20:27.618 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:27.618 "strip_size_kb": 64, 00:20:27.618 "state": "configuring", 00:20:27.618 "raid_level": "raid5f", 00:20:27.618 "superblock": true, 00:20:27.618 "num_base_bdevs": 4, 00:20:27.618 "num_base_bdevs_discovered": 2, 00:20:27.618 "num_base_bdevs_operational": 4, 00:20:27.618 "base_bdevs_list": [ 00:20:27.618 { 00:20:27.618 "name": "BaseBdev1", 00:20:27.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:27.618 "is_configured": false, 00:20:27.618 "data_offset": 0, 00:20:27.618 "data_size": 0 00:20:27.618 }, 00:20:27.618 { 00:20:27.618 "name": null, 00:20:27.618 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:27.618 
"is_configured": false, 00:20:27.618 "data_offset": 0, 00:20:27.618 "data_size": 63488 00:20:27.618 }, 00:20:27.618 { 00:20:27.618 "name": "BaseBdev3", 00:20:27.618 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:27.618 "is_configured": true, 00:20:27.618 "data_offset": 2048, 00:20:27.618 "data_size": 63488 00:20:27.618 }, 00:20:27.618 { 00:20:27.618 "name": "BaseBdev4", 00:20:27.618 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:27.618 "is_configured": true, 00:20:27.618 "data_offset": 2048, 00:20:27.618 "data_size": 63488 00:20:27.618 } 00:20:27.618 ] 00:20:27.618 }' 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:27.618 07:17:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.186 [2024-11-20 07:17:25.459847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:20:28.186 BaseBdev1 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.186 [ 00:20:28.186 { 00:20:28.186 "name": "BaseBdev1", 00:20:28.186 "aliases": [ 00:20:28.186 "8b48344c-4208-4239-b10a-b541bf1f4f8a" 00:20:28.186 ], 00:20:28.186 "product_name": "Malloc disk", 00:20:28.186 "block_size": 512, 00:20:28.186 "num_blocks": 65536, 00:20:28.186 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 
00:20:28.186 "assigned_rate_limits": { 00:20:28.186 "rw_ios_per_sec": 0, 00:20:28.186 "rw_mbytes_per_sec": 0, 00:20:28.186 "r_mbytes_per_sec": 0, 00:20:28.186 "w_mbytes_per_sec": 0 00:20:28.186 }, 00:20:28.186 "claimed": true, 00:20:28.186 "claim_type": "exclusive_write", 00:20:28.186 "zoned": false, 00:20:28.186 "supported_io_types": { 00:20:28.186 "read": true, 00:20:28.186 "write": true, 00:20:28.186 "unmap": true, 00:20:28.186 "flush": true, 00:20:28.186 "reset": true, 00:20:28.186 "nvme_admin": false, 00:20:28.186 "nvme_io": false, 00:20:28.186 "nvme_io_md": false, 00:20:28.186 "write_zeroes": true, 00:20:28.186 "zcopy": true, 00:20:28.186 "get_zone_info": false, 00:20:28.186 "zone_management": false, 00:20:28.186 "zone_append": false, 00:20:28.186 "compare": false, 00:20:28.186 "compare_and_write": false, 00:20:28.186 "abort": true, 00:20:28.186 "seek_hole": false, 00:20:28.186 "seek_data": false, 00:20:28.186 "copy": true, 00:20:28.186 "nvme_iov_md": false 00:20:28.186 }, 00:20:28.186 "memory_domains": [ 00:20:28.186 { 00:20:28.186 "dma_device_id": "system", 00:20:28.186 "dma_device_type": 1 00:20:28.186 }, 00:20:28.186 { 00:20:28.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.186 "dma_device_type": 2 00:20:28.186 } 00:20:28.186 ], 00:20:28.186 "driver_specific": {} 00:20:28.186 } 00:20:28.186 ] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.186 07:17:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.186 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.445 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.445 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.445 "name": "Existed_Raid", 00:20:28.445 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:28.445 "strip_size_kb": 64, 00:20:28.445 "state": "configuring", 00:20:28.445 "raid_level": "raid5f", 00:20:28.445 "superblock": true, 00:20:28.445 "num_base_bdevs": 4, 00:20:28.445 "num_base_bdevs_discovered": 3, 00:20:28.445 "num_base_bdevs_operational": 4, 00:20:28.445 "base_bdevs_list": [ 00:20:28.445 { 00:20:28.445 "name": "BaseBdev1", 00:20:28.445 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 
00:20:28.445 "is_configured": true, 00:20:28.445 "data_offset": 2048, 00:20:28.445 "data_size": 63488 00:20:28.445 }, 00:20:28.445 { 00:20:28.445 "name": null, 00:20:28.445 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:28.445 "is_configured": false, 00:20:28.445 "data_offset": 0, 00:20:28.445 "data_size": 63488 00:20:28.445 }, 00:20:28.445 { 00:20:28.445 "name": "BaseBdev3", 00:20:28.445 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:28.445 "is_configured": true, 00:20:28.445 "data_offset": 2048, 00:20:28.445 "data_size": 63488 00:20:28.445 }, 00:20:28.445 { 00:20:28.445 "name": "BaseBdev4", 00:20:28.445 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:28.445 "is_configured": true, 00:20:28.445 "data_offset": 2048, 00:20:28.445 "data_size": 63488 00:20:28.445 } 00:20:28.445 ] 00:20:28.445 }' 00:20:28.445 07:17:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.445 07:17:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.703 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.703 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.703 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.703 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.962 [2024-11-20 07:17:26.064099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:28.962 "name": "Existed_Raid", 00:20:28.962 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:28.962 "strip_size_kb": 64, 00:20:28.962 "state": "configuring", 00:20:28.962 "raid_level": "raid5f", 00:20:28.962 "superblock": true, 00:20:28.962 "num_base_bdevs": 4, 00:20:28.962 "num_base_bdevs_discovered": 2, 00:20:28.962 "num_base_bdevs_operational": 4, 00:20:28.962 "base_bdevs_list": [ 00:20:28.962 { 00:20:28.962 "name": "BaseBdev1", 00:20:28.962 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:28.962 "is_configured": true, 00:20:28.962 "data_offset": 2048, 00:20:28.962 "data_size": 63488 00:20:28.962 }, 00:20:28.962 { 00:20:28.962 "name": null, 00:20:28.962 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:28.962 "is_configured": false, 00:20:28.962 "data_offset": 0, 00:20:28.962 "data_size": 63488 00:20:28.962 }, 00:20:28.962 { 00:20:28.962 "name": null, 00:20:28.962 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:28.962 "is_configured": false, 00:20:28.962 "data_offset": 0, 00:20:28.962 "data_size": 63488 00:20:28.962 }, 00:20:28.962 { 00:20:28.962 "name": "BaseBdev4", 00:20:28.962 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:28.962 "is_configured": true, 00:20:28.962 "data_offset": 2048, 00:20:28.962 "data_size": 63488 00:20:28.962 } 00:20:28.962 ] 00:20:28.962 }' 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:28.962 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.530 [2024-11-20 07:17:26.676281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:29.530 07:17:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:29.530 "name": "Existed_Raid", 00:20:29.530 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:29.530 "strip_size_kb": 64, 00:20:29.530 "state": "configuring", 00:20:29.530 "raid_level": "raid5f", 00:20:29.530 "superblock": true, 00:20:29.530 "num_base_bdevs": 4, 00:20:29.530 "num_base_bdevs_discovered": 3, 00:20:29.530 "num_base_bdevs_operational": 4, 00:20:29.530 "base_bdevs_list": [ 00:20:29.530 { 00:20:29.530 "name": "BaseBdev1", 00:20:29.530 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:29.530 "is_configured": true, 00:20:29.530 "data_offset": 2048, 00:20:29.530 "data_size": 63488 00:20:29.530 }, 00:20:29.530 { 00:20:29.530 "name": null, 00:20:29.530 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:29.530 "is_configured": false, 00:20:29.530 "data_offset": 0, 00:20:29.530 "data_size": 63488 00:20:29.530 }, 00:20:29.530 { 00:20:29.530 "name": "BaseBdev3", 00:20:29.530 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:29.530 
"is_configured": true, 00:20:29.530 "data_offset": 2048, 00:20:29.530 "data_size": 63488 00:20:29.530 }, 00:20:29.530 { 00:20:29.530 "name": "BaseBdev4", 00:20:29.530 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:29.530 "is_configured": true, 00:20:29.530 "data_offset": 2048, 00:20:29.530 "data_size": 63488 00:20:29.530 } 00:20:29.530 ] 00:20:29.530 }' 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:29.530 07:17:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.097 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.098 [2024-11-20 07:17:27.236550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.098 "name": "Existed_Raid", 00:20:30.098 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:30.098 "strip_size_kb": 64, 00:20:30.098 "state": "configuring", 00:20:30.098 "raid_level": "raid5f", 00:20:30.098 
"superblock": true, 00:20:30.098 "num_base_bdevs": 4, 00:20:30.098 "num_base_bdevs_discovered": 2, 00:20:30.098 "num_base_bdevs_operational": 4, 00:20:30.098 "base_bdevs_list": [ 00:20:30.098 { 00:20:30.098 "name": null, 00:20:30.098 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:30.098 "is_configured": false, 00:20:30.098 "data_offset": 0, 00:20:30.098 "data_size": 63488 00:20:30.098 }, 00:20:30.098 { 00:20:30.098 "name": null, 00:20:30.098 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:30.098 "is_configured": false, 00:20:30.098 "data_offset": 0, 00:20:30.098 "data_size": 63488 00:20:30.098 }, 00:20:30.098 { 00:20:30.098 "name": "BaseBdev3", 00:20:30.098 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:30.098 "is_configured": true, 00:20:30.098 "data_offset": 2048, 00:20:30.098 "data_size": 63488 00:20:30.098 }, 00:20:30.098 { 00:20:30.098 "name": "BaseBdev4", 00:20:30.098 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:30.098 "is_configured": true, 00:20:30.098 "data_offset": 2048, 00:20:30.098 "data_size": 63488 00:20:30.098 } 00:20:30.098 ] 00:20:30.098 }' 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.098 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.665 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.665 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:30.665 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.665 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.665 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.666 [2024-11-20 07:17:27.933027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.666 07:17:27 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.666 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.925 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.925 "name": "Existed_Raid", 00:20:30.925 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:30.925 "strip_size_kb": 64, 00:20:30.925 "state": "configuring", 00:20:30.925 "raid_level": "raid5f", 00:20:30.925 "superblock": true, 00:20:30.925 "num_base_bdevs": 4, 00:20:30.925 "num_base_bdevs_discovered": 3, 00:20:30.925 "num_base_bdevs_operational": 4, 00:20:30.925 "base_bdevs_list": [ 00:20:30.925 { 00:20:30.925 "name": null, 00:20:30.925 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:30.925 "is_configured": false, 00:20:30.925 "data_offset": 0, 00:20:30.925 "data_size": 63488 00:20:30.925 }, 00:20:30.925 { 00:20:30.925 "name": "BaseBdev2", 00:20:30.925 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:30.925 "is_configured": true, 00:20:30.925 "data_offset": 2048, 00:20:30.925 "data_size": 63488 00:20:30.925 }, 00:20:30.925 { 00:20:30.925 "name": "BaseBdev3", 00:20:30.925 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:30.925 "is_configured": true, 00:20:30.925 "data_offset": 2048, 00:20:30.925 "data_size": 63488 00:20:30.925 }, 00:20:30.925 { 00:20:30.925 "name": "BaseBdev4", 00:20:30.925 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:30.925 "is_configured": true, 00:20:30.925 "data_offset": 2048, 00:20:30.925 "data_size": 63488 00:20:30.925 } 00:20:30.925 ] 00:20:30.925 }' 00:20:30.925 07:17:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:20:30.925 07:17:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.183 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:31.183 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.183 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.183 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.183 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b48344c-4208-4239-b10a-b541bf1f4f8a 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.442 [2024-11-20 07:17:28.614919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:31.442 [2024-11-20 07:17:28.615290] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:31.442 [2024-11-20 07:17:28.615309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:31.442 NewBaseBdev 00:20:31.442 [2024-11-20 07:17:28.615626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.442 [2024-11-20 07:17:28.622016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:31.442 [2024-11-20 07:17:28.622047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:20:31.442 [2024-11-20 07:17:28.622338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.442 [ 00:20:31.442 { 00:20:31.442 "name": "NewBaseBdev", 00:20:31.442 "aliases": [ 00:20:31.442 "8b48344c-4208-4239-b10a-b541bf1f4f8a" 00:20:31.442 ], 00:20:31.442 "product_name": "Malloc disk", 00:20:31.442 "block_size": 512, 00:20:31.442 "num_blocks": 65536, 00:20:31.442 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:31.442 "assigned_rate_limits": { 00:20:31.442 "rw_ios_per_sec": 0, 00:20:31.442 "rw_mbytes_per_sec": 0, 00:20:31.442 "r_mbytes_per_sec": 0, 00:20:31.442 "w_mbytes_per_sec": 0 00:20:31.442 }, 00:20:31.442 "claimed": true, 00:20:31.442 "claim_type": "exclusive_write", 00:20:31.442 "zoned": false, 00:20:31.442 "supported_io_types": { 00:20:31.442 "read": true, 00:20:31.442 "write": true, 00:20:31.442 "unmap": true, 00:20:31.442 "flush": true, 00:20:31.442 "reset": true, 00:20:31.442 "nvme_admin": false, 00:20:31.442 "nvme_io": false, 00:20:31.442 "nvme_io_md": false, 00:20:31.442 "write_zeroes": true, 00:20:31.442 "zcopy": true, 00:20:31.442 "get_zone_info": false, 00:20:31.442 "zone_management": false, 00:20:31.442 "zone_append": false, 00:20:31.442 "compare": false, 00:20:31.442 "compare_and_write": false, 00:20:31.442 "abort": true, 00:20:31.442 "seek_hole": false, 00:20:31.442 "seek_data": false, 00:20:31.442 "copy": true, 00:20:31.442 "nvme_iov_md": false 00:20:31.442 }, 00:20:31.442 "memory_domains": [ 00:20:31.442 { 00:20:31.442 "dma_device_id": "system", 00:20:31.442 "dma_device_type": 1 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.442 "dma_device_type": 2 00:20:31.442 } 
00:20:31.442 ], 00:20:31.442 "driver_specific": {} 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.442 
07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:31.442 "name": "Existed_Raid", 00:20:31.442 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:31.442 "strip_size_kb": 64, 00:20:31.442 "state": "online", 00:20:31.442 "raid_level": "raid5f", 00:20:31.442 "superblock": true, 00:20:31.442 "num_base_bdevs": 4, 00:20:31.442 "num_base_bdevs_discovered": 4, 00:20:31.442 "num_base_bdevs_operational": 4, 00:20:31.442 "base_bdevs_list": [ 00:20:31.442 { 00:20:31.442 "name": "NewBaseBdev", 00:20:31.442 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:31.442 "is_configured": true, 00:20:31.442 "data_offset": 2048, 00:20:31.442 "data_size": 63488 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "name": "BaseBdev2", 00:20:31.442 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:31.442 "is_configured": true, 00:20:31.442 "data_offset": 2048, 00:20:31.442 "data_size": 63488 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "name": "BaseBdev3", 00:20:31.442 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:31.442 "is_configured": true, 00:20:31.442 "data_offset": 2048, 00:20:31.442 "data_size": 63488 00:20:31.442 }, 00:20:31.442 { 00:20:31.442 "name": "BaseBdev4", 00:20:31.442 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:31.442 "is_configured": true, 00:20:31.442 "data_offset": 2048, 00:20:31.442 "data_size": 63488 00:20:31.442 } 00:20:31.442 ] 00:20:31.442 }' 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:31.442 07:17:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.013 [2024-11-20 07:17:29.174028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.013 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:32.013 "name": "Existed_Raid", 00:20:32.013 "aliases": [ 00:20:32.013 "4fefa6ca-8db3-4e31-88e1-24d8c49d418b" 00:20:32.013 ], 00:20:32.013 "product_name": "Raid Volume", 00:20:32.013 "block_size": 512, 00:20:32.013 "num_blocks": 190464, 00:20:32.013 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:32.013 "assigned_rate_limits": { 00:20:32.013 "rw_ios_per_sec": 0, 00:20:32.013 "rw_mbytes_per_sec": 0, 00:20:32.013 "r_mbytes_per_sec": 0, 00:20:32.014 "w_mbytes_per_sec": 0 00:20:32.014 }, 00:20:32.014 "claimed": false, 00:20:32.014 "zoned": false, 00:20:32.014 "supported_io_types": { 00:20:32.014 "read": true, 00:20:32.014 "write": true, 00:20:32.014 "unmap": false, 00:20:32.014 "flush": false, 
00:20:32.014 "reset": true, 00:20:32.014 "nvme_admin": false, 00:20:32.014 "nvme_io": false, 00:20:32.014 "nvme_io_md": false, 00:20:32.014 "write_zeroes": true, 00:20:32.014 "zcopy": false, 00:20:32.014 "get_zone_info": false, 00:20:32.014 "zone_management": false, 00:20:32.014 "zone_append": false, 00:20:32.014 "compare": false, 00:20:32.014 "compare_and_write": false, 00:20:32.014 "abort": false, 00:20:32.014 "seek_hole": false, 00:20:32.014 "seek_data": false, 00:20:32.014 "copy": false, 00:20:32.014 "nvme_iov_md": false 00:20:32.014 }, 00:20:32.014 "driver_specific": { 00:20:32.014 "raid": { 00:20:32.014 "uuid": "4fefa6ca-8db3-4e31-88e1-24d8c49d418b", 00:20:32.014 "strip_size_kb": 64, 00:20:32.014 "state": "online", 00:20:32.014 "raid_level": "raid5f", 00:20:32.014 "superblock": true, 00:20:32.014 "num_base_bdevs": 4, 00:20:32.014 "num_base_bdevs_discovered": 4, 00:20:32.014 "num_base_bdevs_operational": 4, 00:20:32.014 "base_bdevs_list": [ 00:20:32.014 { 00:20:32.014 "name": "NewBaseBdev", 00:20:32.014 "uuid": "8b48344c-4208-4239-b10a-b541bf1f4f8a", 00:20:32.014 "is_configured": true, 00:20:32.014 "data_offset": 2048, 00:20:32.014 "data_size": 63488 00:20:32.014 }, 00:20:32.014 { 00:20:32.014 "name": "BaseBdev2", 00:20:32.014 "uuid": "9d14f037-3353-4a95-9b8b-a9c1fc795886", 00:20:32.014 "is_configured": true, 00:20:32.014 "data_offset": 2048, 00:20:32.014 "data_size": 63488 00:20:32.014 }, 00:20:32.014 { 00:20:32.014 "name": "BaseBdev3", 00:20:32.014 "uuid": "61a461a3-8caf-41d9-a192-6233fc83a4d4", 00:20:32.014 "is_configured": true, 00:20:32.014 "data_offset": 2048, 00:20:32.014 "data_size": 63488 00:20:32.014 }, 00:20:32.014 { 00:20:32.014 "name": "BaseBdev4", 00:20:32.014 "uuid": "1831b87e-ccf8-432a-9597-b90ac7394a07", 00:20:32.014 "is_configured": true, 00:20:32.014 "data_offset": 2048, 00:20:32.014 "data_size": 63488 00:20:32.014 } 00:20:32.014 ] 00:20:32.014 } 00:20:32.014 } 00:20:32.014 }' 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:20:32.014 BaseBdev2 00:20:32.014 BaseBdev3 00:20:32.014 BaseBdev4' 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.014 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:32.272 
07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:32.272 [2024-11-20 07:17:29.541794] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:32.272 [2024-11-20 07:17:29.541834] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:32.272 [2024-11-20 07:17:29.541962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:32.272 [2024-11-20 07:17:29.542362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:32.272 [2024-11-20 07:17:29.542382] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83758 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83758 ']' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83758 
00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83758 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.272 killing process with pid 83758 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83758' 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83758 00:20:32.272 [2024-11-20 07:17:29.582810] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:32.272 07:17:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83758 00:20:32.839 [2024-11-20 07:17:29.948107] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:33.829 07:17:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:20:33.829 00:20:33.829 real 0m13.012s 00:20:33.829 user 0m21.512s 00:20:33.829 sys 0m1.821s 00:20:33.829 07:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:33.829 07:17:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.829 ************************************ 00:20:33.829 END TEST raid5f_state_function_test_sb 00:20:33.829 ************************************ 00:20:33.829 07:17:31 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:20:33.829 07:17:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:20:33.829 07:17:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:33.829 07:17:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:33.829 ************************************ 00:20:33.829 START TEST raid5f_superblock_test 00:20:33.829 ************************************ 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84441 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84441 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84441 ']' 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.829 07:17:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.089 [2024-11-20 07:17:31.184562] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:20:34.089 [2024-11-20 07:17:31.184727] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84441 ] 00:20:34.089 [2024-11-20 07:17:31.359881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.350 [2024-11-20 07:17:31.497332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.608 [2024-11-20 07:17:31.711859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:34.608 [2024-11-20 07:17:31.711945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.176 malloc1 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.176 [2024-11-20 07:17:32.276383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:35.176 [2024-11-20 07:17:32.276617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.176 [2024-11-20 07:17:32.276699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:35.176 [2024-11-20 07:17:32.276976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.176 [2024-11-20 07:17:32.279820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.176 [2024-11-20 07:17:32.280006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:35.176 pt1 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.176 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 malloc2 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 [2024-11-20 07:17:32.333397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:35.177 [2024-11-20 07:17:32.333472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.177 [2024-11-20 07:17:32.333504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:35.177 [2024-11-20 07:17:32.333529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.177 [2024-11-20 07:17:32.336359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.177 [2024-11-20 07:17:32.336408] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:35.177 pt2 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 malloc3 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 [2024-11-20 07:17:32.395684] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:35.177 [2024-11-20 07:17:32.395886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.177 [2024-11-20 07:17:32.395934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:35.177 [2024-11-20 07:17:32.395950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.177 [2024-11-20 07:17:32.398738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.177 [2024-11-20 07:17:32.398786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:35.177 pt3 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 malloc4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 [2024-11-20 07:17:32.454532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:35.177 [2024-11-20 07:17:32.454616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.177 [2024-11-20 07:17:32.454650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:35.177 [2024-11-20 07:17:32.454664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.177 [2024-11-20 07:17:32.457662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.177 [2024-11-20 07:17:32.457712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:35.177 pt4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.177 [2024-11-20 07:17:32.462620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:35.177 [2024-11-20 07:17:32.465178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:35.177 [2024-11-20 07:17:32.465455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:35.177 [2024-11-20 07:17:32.465568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:35.177 [2024-11-20 07:17:32.465842] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:35.177 [2024-11-20 07:17:32.465891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:35.177 [2024-11-20 07:17:32.466236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:35.177 [2024-11-20 07:17:32.473059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:35.177 [2024-11-20 07:17:32.473090] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:35.177 [2024-11-20 07:17:32.473416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:35.177 
07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.177 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.436 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.436 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:35.436 "name": "raid_bdev1", 00:20:35.436 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:35.436 "strip_size_kb": 64, 00:20:35.436 "state": "online", 00:20:35.436 "raid_level": "raid5f", 00:20:35.436 "superblock": true, 00:20:35.436 "num_base_bdevs": 4, 00:20:35.436 "num_base_bdevs_discovered": 4, 00:20:35.436 "num_base_bdevs_operational": 4, 00:20:35.436 "base_bdevs_list": [ 00:20:35.436 { 00:20:35.436 "name": "pt1", 00:20:35.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:35.436 "is_configured": true, 00:20:35.436 "data_offset": 2048, 00:20:35.436 "data_size": 63488 00:20:35.436 }, 00:20:35.436 { 00:20:35.436 "name": "pt2", 00:20:35.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:35.436 "is_configured": true, 00:20:35.436 "data_offset": 2048, 00:20:35.436 
"data_size": 63488 00:20:35.436 }, 00:20:35.436 { 00:20:35.436 "name": "pt3", 00:20:35.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:35.436 "is_configured": true, 00:20:35.436 "data_offset": 2048, 00:20:35.436 "data_size": 63488 00:20:35.436 }, 00:20:35.436 { 00:20:35.436 "name": "pt4", 00:20:35.436 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:35.436 "is_configured": true, 00:20:35.436 "data_offset": 2048, 00:20:35.436 "data_size": 63488 00:20:35.436 } 00:20:35.436 ] 00:20:35.436 }' 00:20:35.436 07:17:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:35.436 07:17:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.004 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.005 [2024-11-20 07:17:33.045477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:36.005 "name": "raid_bdev1", 00:20:36.005 "aliases": [ 00:20:36.005 "03652127-7c41-403d-87f6-f1be8f9fde0e" 00:20:36.005 ], 00:20:36.005 "product_name": "Raid Volume", 00:20:36.005 "block_size": 512, 00:20:36.005 "num_blocks": 190464, 00:20:36.005 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:36.005 "assigned_rate_limits": { 00:20:36.005 "rw_ios_per_sec": 0, 00:20:36.005 "rw_mbytes_per_sec": 0, 00:20:36.005 "r_mbytes_per_sec": 0, 00:20:36.005 "w_mbytes_per_sec": 0 00:20:36.005 }, 00:20:36.005 "claimed": false, 00:20:36.005 "zoned": false, 00:20:36.005 "supported_io_types": { 00:20:36.005 "read": true, 00:20:36.005 "write": true, 00:20:36.005 "unmap": false, 00:20:36.005 "flush": false, 00:20:36.005 "reset": true, 00:20:36.005 "nvme_admin": false, 00:20:36.005 "nvme_io": false, 00:20:36.005 "nvme_io_md": false, 00:20:36.005 "write_zeroes": true, 00:20:36.005 "zcopy": false, 00:20:36.005 "get_zone_info": false, 00:20:36.005 "zone_management": false, 00:20:36.005 "zone_append": false, 00:20:36.005 "compare": false, 00:20:36.005 "compare_and_write": false, 00:20:36.005 "abort": false, 00:20:36.005 "seek_hole": false, 00:20:36.005 "seek_data": false, 00:20:36.005 "copy": false, 00:20:36.005 "nvme_iov_md": false 00:20:36.005 }, 00:20:36.005 "driver_specific": { 00:20:36.005 "raid": { 00:20:36.005 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:36.005 "strip_size_kb": 64, 00:20:36.005 "state": "online", 00:20:36.005 "raid_level": "raid5f", 00:20:36.005 "superblock": true, 00:20:36.005 "num_base_bdevs": 4, 00:20:36.005 "num_base_bdevs_discovered": 4, 00:20:36.005 "num_base_bdevs_operational": 4, 00:20:36.005 "base_bdevs_list": [ 00:20:36.005 { 00:20:36.005 "name": "pt1", 00:20:36.005 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.005 "is_configured": true, 00:20:36.005 "data_offset": 2048, 
00:20:36.005 "data_size": 63488 00:20:36.005 }, 00:20:36.005 { 00:20:36.005 "name": "pt2", 00:20:36.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.005 "is_configured": true, 00:20:36.005 "data_offset": 2048, 00:20:36.005 "data_size": 63488 00:20:36.005 }, 00:20:36.005 { 00:20:36.005 "name": "pt3", 00:20:36.005 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.005 "is_configured": true, 00:20:36.005 "data_offset": 2048, 00:20:36.005 "data_size": 63488 00:20:36.005 }, 00:20:36.005 { 00:20:36.005 "name": "pt4", 00:20:36.005 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.005 "is_configured": true, 00:20:36.005 "data_offset": 2048, 00:20:36.005 "data_size": 63488 00:20:36.005 } 00:20:36.005 ] 00:20:36.005 } 00:20:36.005 } 00:20:36.005 }' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:36.005 pt2 00:20:36.005 pt3 00:20:36.005 pt4' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.005 07:17:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.005 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 [2024-11-20 07:17:33.441506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=03652127-7c41-403d-87f6-f1be8f9fde0e 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
03652127-7c41-403d-87f6-f1be8f9fde0e ']' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 [2024-11-20 07:17:33.493307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.264 [2024-11-20 07:17:33.493340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:36.264 [2024-11-20 07:17:33.493445] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:36.264 [2024-11-20 07:17:33.493554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:36.264 [2024-11-20 07:17:33.493578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.264 
07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.264 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.265 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.525 07:17:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:20:36.525 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.526 [2024-11-20 07:17:33.681385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:36.526 [2024-11-20 07:17:33.683940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:36.526 [2024-11-20 07:17:33.684156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:36.526 [2024-11-20 07:17:33.684246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:20:36.526 [2024-11-20 07:17:33.684326] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:36.526 [2024-11-20 07:17:33.684395] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:36.526 [2024-11-20 07:17:33.684428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:36.526 [2024-11-20 07:17:33.684458] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:20:36.526 [2024-11-20 07:17:33.684480] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:36.526 [2024-11-20 07:17:33.684496] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:36.526 request: 00:20:36.526 { 00:20:36.526 "name": "raid_bdev1", 00:20:36.526 "raid_level": "raid5f", 00:20:36.526 "base_bdevs": [ 00:20:36.526 "malloc1", 00:20:36.526 "malloc2", 00:20:36.526 "malloc3", 00:20:36.526 "malloc4" 00:20:36.526 ], 00:20:36.526 "strip_size_kb": 64, 00:20:36.526 "superblock": false, 00:20:36.526 "method": "bdev_raid_create", 00:20:36.526 "req_id": 1 00:20:36.526 } 00:20:36.526 Got JSON-RPC error response 
00:20:36.526 response: 00:20:36.526 { 00:20:36.526 "code": -17, 00:20:36.526 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:36.526 } 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.526 [2024-11-20 07:17:33.749377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:36.526 [2024-11-20 07:17:33.749589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:20:36.526 [2024-11-20 07:17:33.749661] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:36.526 [2024-11-20 07:17:33.749786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:36.526 [2024-11-20 07:17:33.752695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:36.526 [2024-11-20 07:17:33.752879] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:36.526 [2024-11-20 07:17:33.753131] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:36.526 [2024-11-20 07:17:33.753318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:36.526 pt1 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.526 "name": "raid_bdev1", 00:20:36.526 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:36.526 "strip_size_kb": 64, 00:20:36.526 "state": "configuring", 00:20:36.526 "raid_level": "raid5f", 00:20:36.526 "superblock": true, 00:20:36.526 "num_base_bdevs": 4, 00:20:36.526 "num_base_bdevs_discovered": 1, 00:20:36.526 "num_base_bdevs_operational": 4, 00:20:36.526 "base_bdevs_list": [ 00:20:36.526 { 00:20:36.526 "name": "pt1", 00:20:36.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:36.526 "is_configured": true, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 }, 00:20:36.526 { 00:20:36.526 "name": null, 00:20:36.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:36.526 "is_configured": false, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 }, 00:20:36.526 { 00:20:36.526 "name": null, 00:20:36.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:36.526 "is_configured": false, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 }, 00:20:36.526 { 00:20:36.526 "name": null, 00:20:36.526 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:36.526 "is_configured": false, 00:20:36.526 "data_offset": 2048, 00:20:36.526 "data_size": 63488 00:20:36.526 } 00:20:36.526 ] 00:20:36.526 }' 
00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.526 07:17:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.096 [2024-11-20 07:17:34.281922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.096 [2024-11-20 07:17:34.282013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.096 [2024-11-20 07:17:34.282042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:37.096 [2024-11-20 07:17:34.282060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.096 [2024-11-20 07:17:34.282610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.096 [2024-11-20 07:17:34.282654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.096 [2024-11-20 07:17:34.282754] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.096 [2024-11-20 07:17:34.282792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.096 pt2 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.096 [2024-11-20 07:17:34.289911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.096 "name": "raid_bdev1", 00:20:37.096 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:37.096 "strip_size_kb": 64, 00:20:37.096 "state": "configuring", 00:20:37.096 "raid_level": "raid5f", 00:20:37.096 "superblock": true, 00:20:37.096 "num_base_bdevs": 4, 00:20:37.096 "num_base_bdevs_discovered": 1, 00:20:37.096 "num_base_bdevs_operational": 4, 00:20:37.096 "base_bdevs_list": [ 00:20:37.096 { 00:20:37.096 "name": "pt1", 00:20:37.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:37.096 "is_configured": true, 00:20:37.096 "data_offset": 2048, 00:20:37.096 "data_size": 63488 00:20:37.096 }, 00:20:37.096 { 00:20:37.096 "name": null, 00:20:37.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.096 "is_configured": false, 00:20:37.096 "data_offset": 0, 00:20:37.096 "data_size": 63488 00:20:37.096 }, 00:20:37.096 { 00:20:37.096 "name": null, 00:20:37.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:37.096 "is_configured": false, 00:20:37.096 "data_offset": 2048, 00:20:37.096 "data_size": 63488 00:20:37.096 }, 00:20:37.096 { 00:20:37.096 "name": null, 00:20:37.096 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:37.096 "is_configured": false, 00:20:37.096 "data_offset": 2048, 00:20:37.096 "data_size": 63488 00:20:37.096 } 00:20:37.096 ] 00:20:37.096 }' 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.096 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 [2024-11-20 07:17:34.882187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:37.663 [2024-11-20 07:17:34.882266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.663 [2024-11-20 07:17:34.882299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:37.663 [2024-11-20 07:17:34.882313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.663 [2024-11-20 07:17:34.882932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.663 [2024-11-20 07:17:34.882962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:37.663 [2024-11-20 07:17:34.883066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:37.663 [2024-11-20 07:17:34.883097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:37.663 pt2 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 [2024-11-20 07:17:34.890110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:20:37.663 [2024-11-20 07:17:34.890169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.663 [2024-11-20 07:17:34.890196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:37.663 [2024-11-20 07:17:34.890208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.663 [2024-11-20 07:17:34.890651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.663 [2024-11-20 07:17:34.890691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:37.663 [2024-11-20 07:17:34.890785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:37.663 [2024-11-20 07:17:34.890812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:37.663 pt3 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 [2024-11-20 07:17:34.898081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:37.663 [2024-11-20 07:17:34.898142] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.663 [2024-11-20 07:17:34.898172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:37.663 [2024-11-20 07:17:34.898186] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.663 [2024-11-20 07:17:34.898655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.663 [2024-11-20 07:17:34.898687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:37.663 [2024-11-20 07:17:34.898768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:37.663 [2024-11-20 07:17:34.898795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:37.663 [2024-11-20 07:17:34.898989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:37.663 [2024-11-20 07:17:34.899006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:37.663 [2024-11-20 07:17:34.899302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:37.663 [2024-11-20 07:17:34.905689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:37.663 pt4 00:20:37.663 [2024-11-20 07:17:34.905888] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:37.663 [2024-11-20 07:17:34.906129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:37.663 "name": "raid_bdev1", 00:20:37.663 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:37.663 "strip_size_kb": 64, 00:20:37.663 "state": "online", 00:20:37.663 "raid_level": "raid5f", 00:20:37.663 "superblock": true, 00:20:37.663 "num_base_bdevs": 4, 00:20:37.663 "num_base_bdevs_discovered": 4, 00:20:37.663 "num_base_bdevs_operational": 4, 00:20:37.663 "base_bdevs_list": [ 00:20:37.663 { 00:20:37.663 "name": "pt1", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:37.663 "is_configured": true, 00:20:37.663 
"data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 }, 00:20:37.663 { 00:20:37.663 "name": "pt2", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:37.663 "is_configured": true, 00:20:37.663 "data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 }, 00:20:37.663 { 00:20:37.663 "name": "pt3", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:37.663 "is_configured": true, 00:20:37.663 "data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 }, 00:20:37.663 { 00:20:37.663 "name": "pt4", 00:20:37.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:37.663 "is_configured": true, 00:20:37.663 "data_offset": 2048, 00:20:37.663 "data_size": 63488 00:20:37.663 } 00:20:37.663 ] 00:20:37.663 }' 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:37.663 07:17:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.232 07:17:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:38.232 [2024-11-20 07:17:35.493967] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.232 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:38.232 "name": "raid_bdev1", 00:20:38.232 "aliases": [ 00:20:38.232 "03652127-7c41-403d-87f6-f1be8f9fde0e" 00:20:38.232 ], 00:20:38.232 "product_name": "Raid Volume", 00:20:38.232 "block_size": 512, 00:20:38.232 "num_blocks": 190464, 00:20:38.232 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:38.232 "assigned_rate_limits": { 00:20:38.232 "rw_ios_per_sec": 0, 00:20:38.232 "rw_mbytes_per_sec": 0, 00:20:38.232 "r_mbytes_per_sec": 0, 00:20:38.232 "w_mbytes_per_sec": 0 00:20:38.232 }, 00:20:38.232 "claimed": false, 00:20:38.232 "zoned": false, 00:20:38.232 "supported_io_types": { 00:20:38.232 "read": true, 00:20:38.232 "write": true, 00:20:38.232 "unmap": false, 00:20:38.232 "flush": false, 00:20:38.232 "reset": true, 00:20:38.232 "nvme_admin": false, 00:20:38.232 "nvme_io": false, 00:20:38.232 "nvme_io_md": false, 00:20:38.232 "write_zeroes": true, 00:20:38.232 "zcopy": false, 00:20:38.233 "get_zone_info": false, 00:20:38.233 "zone_management": false, 00:20:38.233 "zone_append": false, 00:20:38.233 "compare": false, 00:20:38.233 "compare_and_write": false, 00:20:38.233 "abort": false, 00:20:38.233 "seek_hole": false, 00:20:38.233 "seek_data": false, 00:20:38.233 "copy": false, 00:20:38.233 "nvme_iov_md": false 00:20:38.233 }, 00:20:38.233 "driver_specific": { 00:20:38.233 "raid": { 00:20:38.233 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:38.233 "strip_size_kb": 64, 00:20:38.233 "state": "online", 00:20:38.233 "raid_level": "raid5f", 00:20:38.233 "superblock": true, 00:20:38.233 "num_base_bdevs": 4, 00:20:38.233 "num_base_bdevs_discovered": 4, 
00:20:38.233 "num_base_bdevs_operational": 4, 00:20:38.233 "base_bdevs_list": [ 00:20:38.233 { 00:20:38.233 "name": "pt1", 00:20:38.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:38.233 "is_configured": true, 00:20:38.233 "data_offset": 2048, 00:20:38.233 "data_size": 63488 00:20:38.233 }, 00:20:38.233 { 00:20:38.233 "name": "pt2", 00:20:38.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.233 "is_configured": true, 00:20:38.233 "data_offset": 2048, 00:20:38.233 "data_size": 63488 00:20:38.233 }, 00:20:38.233 { 00:20:38.233 "name": "pt3", 00:20:38.233 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.233 "is_configured": true, 00:20:38.233 "data_offset": 2048, 00:20:38.233 "data_size": 63488 00:20:38.233 }, 00:20:38.233 { 00:20:38.233 "name": "pt4", 00:20:38.233 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:38.233 "is_configured": true, 00:20:38.233 "data_offset": 2048, 00:20:38.233 "data_size": 63488 00:20:38.233 } 00:20:38.233 ] 00:20:38.233 } 00:20:38.233 } 00:20:38.233 }' 00:20:38.233 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:38.496 pt2 00:20:38.496 pt3 00:20:38.496 pt4' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.496 07:17:35 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.496 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:38.755 [2024-11-20 07:17:35.873992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.755 
07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 03652127-7c41-403d-87f6-f1be8f9fde0e '!=' 03652127-7c41-403d-87f6-f1be8f9fde0e ']' 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.755 [2024-11-20 07:17:35.925912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.755 "name": "raid_bdev1", 00:20:38.755 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:38.755 "strip_size_kb": 64, 00:20:38.755 "state": "online", 00:20:38.755 "raid_level": "raid5f", 00:20:38.755 "superblock": true, 00:20:38.755 "num_base_bdevs": 4, 00:20:38.755 "num_base_bdevs_discovered": 3, 00:20:38.755 "num_base_bdevs_operational": 3, 00:20:38.755 "base_bdevs_list": [ 00:20:38.755 { 00:20:38.755 "name": null, 00:20:38.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.755 "is_configured": false, 00:20:38.755 "data_offset": 0, 00:20:38.755 "data_size": 63488 00:20:38.755 }, 00:20:38.755 { 00:20:38.755 "name": "pt2", 00:20:38.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:38.755 "is_configured": true, 00:20:38.755 "data_offset": 2048, 00:20:38.755 "data_size": 63488 00:20:38.755 }, 00:20:38.755 { 00:20:38.755 "name": "pt3", 00:20:38.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:38.755 "is_configured": true, 00:20:38.755 "data_offset": 2048, 00:20:38.755 "data_size": 63488 00:20:38.755 }, 00:20:38.755 { 00:20:38.755 "name": "pt4", 00:20:38.755 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:38.755 "is_configured": true, 00:20:38.755 
"data_offset": 2048, 00:20:38.755 "data_size": 63488 00:20:38.755 } 00:20:38.755 ] 00:20:38.755 }' 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.755 07:17:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 [2024-11-20 07:17:36.433976] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.323 [2024-11-20 07:17:36.434024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.323 [2024-11-20 07:17:36.434123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.323 [2024-11-20 07:17:36.434226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.323 [2024-11-20 07:17:36.434244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.323 [2024-11-20 07:17:36.529975] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:39.323 [2024-11-20 07:17:36.530051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.323 [2024-11-20 07:17:36.530079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:20:39.323 [2024-11-20 07:17:36.530108] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.323 [2024-11-20 07:17:36.533370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.323 [2024-11-20 07:17:36.533652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:39.323 [2024-11-20 07:17:36.533792] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:39.323 [2024-11-20 07:17:36.533852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:39.323 pt2 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.323 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.324 "name": "raid_bdev1", 00:20:39.324 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:39.324 "strip_size_kb": 64, 00:20:39.324 "state": "configuring", 00:20:39.324 "raid_level": "raid5f", 00:20:39.324 "superblock": true, 00:20:39.324 
"num_base_bdevs": 4, 00:20:39.324 "num_base_bdevs_discovered": 1, 00:20:39.324 "num_base_bdevs_operational": 3, 00:20:39.324 "base_bdevs_list": [ 00:20:39.324 { 00:20:39.324 "name": null, 00:20:39.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.324 "is_configured": false, 00:20:39.324 "data_offset": 2048, 00:20:39.324 "data_size": 63488 00:20:39.324 }, 00:20:39.324 { 00:20:39.324 "name": "pt2", 00:20:39.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.324 "is_configured": true, 00:20:39.324 "data_offset": 2048, 00:20:39.324 "data_size": 63488 00:20:39.324 }, 00:20:39.324 { 00:20:39.324 "name": null, 00:20:39.324 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.324 "is_configured": false, 00:20:39.324 "data_offset": 2048, 00:20:39.324 "data_size": 63488 00:20:39.324 }, 00:20:39.324 { 00:20:39.324 "name": null, 00:20:39.324 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.324 "is_configured": false, 00:20:39.324 "data_offset": 2048, 00:20:39.324 "data_size": 63488 00:20:39.324 } 00:20:39.324 ] 00:20:39.324 }' 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.324 07:17:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.890 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:39.890 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:39.890 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:39.890 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.890 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.890 [2024-11-20 07:17:37.086283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:39.890 [2024-11-20 
07:17:37.086358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:39.891 [2024-11-20 07:17:37.086391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:20:39.891 [2024-11-20 07:17:37.086406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:39.891 [2024-11-20 07:17:37.086995] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:39.891 [2024-11-20 07:17:37.087022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:39.891 [2024-11-20 07:17:37.087126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:39.891 [2024-11-20 07:17:37.087164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:39.891 pt3 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:39.891 "name": "raid_bdev1", 00:20:39.891 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:39.891 "strip_size_kb": 64, 00:20:39.891 "state": "configuring", 00:20:39.891 "raid_level": "raid5f", 00:20:39.891 "superblock": true, 00:20:39.891 "num_base_bdevs": 4, 00:20:39.891 "num_base_bdevs_discovered": 2, 00:20:39.891 "num_base_bdevs_operational": 3, 00:20:39.891 "base_bdevs_list": [ 00:20:39.891 { 00:20:39.891 "name": null, 00:20:39.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:39.891 "is_configured": false, 00:20:39.891 "data_offset": 2048, 00:20:39.891 "data_size": 63488 00:20:39.891 }, 00:20:39.891 { 00:20:39.891 "name": "pt2", 00:20:39.891 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:39.891 "is_configured": true, 00:20:39.891 "data_offset": 2048, 00:20:39.891 "data_size": 63488 00:20:39.891 }, 00:20:39.891 { 00:20:39.891 "name": "pt3", 00:20:39.891 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:39.891 "is_configured": true, 00:20:39.891 "data_offset": 2048, 00:20:39.891 "data_size": 63488 00:20:39.891 }, 00:20:39.891 { 00:20:39.891 "name": null, 00:20:39.891 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:39.891 "is_configured": false, 00:20:39.891 "data_offset": 2048, 
00:20:39.891 "data_size": 63488 00:20:39.891 } 00:20:39.891 ] 00:20:39.891 }' 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:39.891 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.457 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:20:40.457 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:40.457 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:20:40.457 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:40.457 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.457 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.457 [2024-11-20 07:17:37.642514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:40.457 [2024-11-20 07:17:37.642598] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.457 [2024-11-20 07:17:37.642632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:40.457 [2024-11-20 07:17:37.642646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.457 [2024-11-20 07:17:37.643273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.457 [2024-11-20 07:17:37.643300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:20:40.457 [2024-11-20 07:17:37.643436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:40.457 [2024-11-20 07:17:37.643468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:40.457 [2024-11-20 07:17:37.643650] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:40.457 [2024-11-20 07:17:37.643666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:40.457 [2024-11-20 07:17:37.644006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:20:40.457 [2024-11-20 07:17:37.650600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:40.458 [2024-11-20 07:17:37.652301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:40.458 [2024-11-20 07:17:37.652670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:40.458 pt4 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.458 
07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.458 "name": "raid_bdev1", 00:20:40.458 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:40.458 "strip_size_kb": 64, 00:20:40.458 "state": "online", 00:20:40.458 "raid_level": "raid5f", 00:20:40.458 "superblock": true, 00:20:40.458 "num_base_bdevs": 4, 00:20:40.458 "num_base_bdevs_discovered": 3, 00:20:40.458 "num_base_bdevs_operational": 3, 00:20:40.458 "base_bdevs_list": [ 00:20:40.458 { 00:20:40.458 "name": null, 00:20:40.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.458 "is_configured": false, 00:20:40.458 "data_offset": 2048, 00:20:40.458 "data_size": 63488 00:20:40.458 }, 00:20:40.458 { 00:20:40.458 "name": "pt2", 00:20:40.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:40.458 "is_configured": true, 00:20:40.458 "data_offset": 2048, 00:20:40.458 "data_size": 63488 00:20:40.458 }, 00:20:40.458 { 00:20:40.458 "name": "pt3", 00:20:40.458 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:40.458 "is_configured": true, 00:20:40.458 "data_offset": 2048, 00:20:40.458 "data_size": 63488 00:20:40.458 }, 00:20:40.458 { 00:20:40.458 "name": "pt4", 00:20:40.458 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:40.458 "is_configured": true, 00:20:40.458 "data_offset": 2048, 00:20:40.458 "data_size": 63488 00:20:40.458 } 00:20:40.458 ] 00:20:40.458 }' 00:20:40.458 07:17:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.458 07:17:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 [2024-11-20 07:17:38.180380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.026 [2024-11-20 07:17:38.180415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:41.026 [2024-11-20 07:17:38.180523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:41.026 [2024-11-20 07:17:38.180618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:41.026 [2024-11-20 07:17:38.180640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 [2024-11-20 07:17:38.252378] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:41.026 [2024-11-20 07:17:38.252455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.026 [2024-11-20 07:17:38.252491] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:20:41.026 [2024-11-20 07:17:38.252508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.026 [2024-11-20 07:17:38.255577] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.026 [2024-11-20 07:17:38.255661] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:41.026 [2024-11-20 07:17:38.255765] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:41.026 [2024-11-20 07:17:38.255835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:41.026 
[2024-11-20 07:17:38.256036] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:41.026 [2024-11-20 07:17:38.256061] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:41.026 [2024-11-20 07:17:38.256082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:41.026 [2024-11-20 07:17:38.256155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:41.026 [2024-11-20 07:17:38.256314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:41.026 pt1 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.026 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.026 "name": "raid_bdev1", 00:20:41.026 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:41.026 "strip_size_kb": 64, 00:20:41.026 "state": "configuring", 00:20:41.026 "raid_level": "raid5f", 00:20:41.026 "superblock": true, 00:20:41.026 "num_base_bdevs": 4, 00:20:41.026 "num_base_bdevs_discovered": 2, 00:20:41.026 "num_base_bdevs_operational": 3, 00:20:41.026 "base_bdevs_list": [ 00:20:41.027 { 00:20:41.027 "name": null, 00:20:41.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.027 "is_configured": false, 00:20:41.027 "data_offset": 2048, 00:20:41.027 "data_size": 63488 00:20:41.027 }, 00:20:41.027 { 00:20:41.027 "name": "pt2", 00:20:41.027 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.027 "is_configured": true, 00:20:41.027 "data_offset": 2048, 00:20:41.027 "data_size": 63488 00:20:41.027 }, 00:20:41.027 { 00:20:41.027 "name": "pt3", 00:20:41.027 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.027 "is_configured": true, 00:20:41.027 "data_offset": 2048, 00:20:41.027 "data_size": 63488 00:20:41.027 }, 00:20:41.027 { 00:20:41.027 "name": null, 00:20:41.027 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.027 "is_configured": false, 00:20:41.027 "data_offset": 2048, 00:20:41.027 "data_size": 63488 00:20:41.027 } 00:20:41.027 ] 
00:20:41.027 }' 00:20:41.027 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.027 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.594 [2024-11-20 07:17:38.844682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:20:41.594 [2024-11-20 07:17:38.844919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.594 [2024-11-20 07:17:38.844970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:20:41.594 [2024-11-20 07:17:38.844986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.594 [2024-11-20 07:17:38.845549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.594 [2024-11-20 07:17:38.845575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:20:41.594 [2024-11-20 07:17:38.845688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:20:41.594 [2024-11-20 07:17:38.845728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:20:41.594 [2024-11-20 07:17:38.845920] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:41.594 [2024-11-20 07:17:38.845937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:20:41.594 [2024-11-20 07:17:38.846247] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:41.594 [2024-11-20 07:17:38.852661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:41.594 pt4 00:20:41.594 [2024-11-20 07:17:38.852829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:41.594 [2024-11-20 07:17:38.853180] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:41.594 07:17:38 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.594 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:41.594 "name": "raid_bdev1", 00:20:41.594 "uuid": "03652127-7c41-403d-87f6-f1be8f9fde0e", 00:20:41.594 "strip_size_kb": 64, 00:20:41.594 "state": "online", 00:20:41.594 "raid_level": "raid5f", 00:20:41.594 "superblock": true, 00:20:41.594 "num_base_bdevs": 4, 00:20:41.594 "num_base_bdevs_discovered": 3, 00:20:41.594 "num_base_bdevs_operational": 3, 00:20:41.594 "base_bdevs_list": [ 00:20:41.594 { 00:20:41.594 "name": null, 00:20:41.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.594 "is_configured": false, 00:20:41.594 "data_offset": 2048, 00:20:41.594 "data_size": 63488 00:20:41.594 }, 00:20:41.594 { 00:20:41.594 "name": "pt2", 00:20:41.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:41.594 "is_configured": true, 00:20:41.594 "data_offset": 2048, 00:20:41.594 "data_size": 63488 00:20:41.594 }, 00:20:41.594 { 00:20:41.594 "name": "pt3", 00:20:41.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:20:41.594 "is_configured": true, 00:20:41.594 "data_offset": 2048, 00:20:41.594 "data_size": 63488 
00:20:41.594 }, 00:20:41.594 { 00:20:41.594 "name": "pt4", 00:20:41.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:20:41.594 "is_configured": true, 00:20:41.594 "data_offset": 2048, 00:20:41.594 "data_size": 63488 00:20:41.595 } 00:20:41.595 ] 00:20:41.595 }' 00:20:41.595 07:17:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:41.595 07:17:38 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.161 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.162 07:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:42.162 [2024-11-20 07:17:39.457065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:42.162 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 03652127-7c41-403d-87f6-f1be8f9fde0e '!=' 03652127-7c41-403d-87f6-f1be8f9fde0e ']' 00:20:42.420 07:17:39 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84441 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84441 ']' 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84441 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84441 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84441' 00:20:42.420 killing process with pid 84441 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84441 00:20:42.420 [2024-11-20 07:17:39.538682] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:42.420 07:17:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84441 00:20:42.420 [2024-11-20 07:17:39.538946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.420 [2024-11-20 07:17:39.539056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.421 [2024-11-20 07:17:39.539077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:42.685 [2024-11-20 07:17:39.904098] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:43.625 07:17:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:20:43.625 
00:20:43.625 real 0m9.849s 00:20:43.625 user 0m16.286s 00:20:43.625 sys 0m1.395s 00:20:43.625 ************************************ 00:20:43.625 END TEST raid5f_superblock_test 00:20:43.625 ************************************ 00:20:43.625 07:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.625 07:17:40 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.884 07:17:40 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:20:43.884 07:17:40 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:20:43.884 07:17:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:43.884 07:17:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.884 07:17:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:43.884 ************************************ 00:20:43.884 START TEST raid5f_rebuild_test 00:20:43.884 ************************************ 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.884 07:17:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:43.884 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:20:43.885 07:17:41 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84938 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84938 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84938 ']' 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.885 07:17:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:43.885 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:43.885 Zero copy mechanism will not be used. 00:20:43.885 [2024-11-20 07:17:41.101603] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:20:43.885 [2024-11-20 07:17:41.101765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84938 ] 00:20:44.143 [2024-11-20 07:17:41.284089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.143 [2024-11-20 07:17:41.439586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.402 [2024-11-20 07:17:41.670080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:44.402 [2024-11-20 07:17:41.670130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.007 BaseBdev1_malloc 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.007 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.008 [2024-11-20 07:17:42.148339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:20:45.008 [2024-11-20 07:17:42.148780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.008 [2024-11-20 07:17:42.148834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:45.008 [2024-11-20 07:17:42.148854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.008 [2024-11-20 07:17:42.151822] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.008 [2024-11-20 07:17:42.152034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:45.008 BaseBdev1 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.008 BaseBdev2_malloc 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.008 [2024-11-20 07:17:42.205140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:45.008 [2024-11-20 07:17:42.205225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.008 [2024-11-20 07:17:42.205257] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:45.008 [2024-11-20 07:17:42.205279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.008 [2024-11-20 07:17:42.208117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.008 [2024-11-20 07:17:42.208169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:45.008 BaseBdev2 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.008 BaseBdev3_malloc 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.008 [2024-11-20 07:17:42.272506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:45.008 [2024-11-20 07:17:42.272827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.008 [2024-11-20 07:17:42.272897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:45.008 [2024-11-20 07:17:42.272922] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.008 
[2024-11-20 07:17:42.275970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.008 [2024-11-20 07:17:42.276034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:45.008 BaseBdev3 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.008 BaseBdev4_malloc 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.008 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.266 [2024-11-20 07:17:42.329667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:45.266 [2024-11-20 07:17:42.329750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.266 [2024-11-20 07:17:42.329781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:45.266 [2024-11-20 07:17:42.329799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.266 [2024-11-20 07:17:42.332716] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.267 [2024-11-20 07:17:42.332776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:20:45.267 BaseBdev4 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.267 spare_malloc 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.267 spare_delay 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.267 [2024-11-20 07:17:42.391203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:45.267 [2024-11-20 07:17:42.391432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:45.267 [2024-11-20 07:17:42.391474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:45.267 [2024-11-20 07:17:42.391493] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:45.267 [2024-11-20 07:17:42.394311] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:45.267 [2024-11-20 07:17:42.394364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:45.267 spare 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.267 [2024-11-20 07:17:42.399347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:45.267 [2024-11-20 07:17:42.401793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:45.267 [2024-11-20 07:17:42.401909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:45.267 [2024-11-20 07:17:42.401999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:45.267 [2024-11-20 07:17:42.402133] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:45.267 [2024-11-20 07:17:42.402155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:20:45.267 [2024-11-20 07:17:42.402496] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:45.267 [2024-11-20 07:17:42.409363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:45.267 [2024-11-20 07:17:42.409505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:45.267 [2024-11-20 07:17:42.409982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:45.267 07:17:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.267 "name": "raid_bdev1", 00:20:45.267 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:45.267 "strip_size_kb": 64, 00:20:45.267 "state": "online", 00:20:45.267 
"raid_level": "raid5f", 00:20:45.267 "superblock": false, 00:20:45.267 "num_base_bdevs": 4, 00:20:45.267 "num_base_bdevs_discovered": 4, 00:20:45.267 "num_base_bdevs_operational": 4, 00:20:45.267 "base_bdevs_list": [ 00:20:45.267 { 00:20:45.267 "name": "BaseBdev1", 00:20:45.267 "uuid": "a3fa1ddc-f601-58b4-a25b-36155f7dcf3b", 00:20:45.267 "is_configured": true, 00:20:45.267 "data_offset": 0, 00:20:45.267 "data_size": 65536 00:20:45.267 }, 00:20:45.267 { 00:20:45.267 "name": "BaseBdev2", 00:20:45.267 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:45.267 "is_configured": true, 00:20:45.267 "data_offset": 0, 00:20:45.267 "data_size": 65536 00:20:45.267 }, 00:20:45.267 { 00:20:45.267 "name": "BaseBdev3", 00:20:45.267 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:45.267 "is_configured": true, 00:20:45.267 "data_offset": 0, 00:20:45.267 "data_size": 65536 00:20:45.267 }, 00:20:45.267 { 00:20:45.267 "name": "BaseBdev4", 00:20:45.267 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:45.267 "is_configured": true, 00:20:45.267 "data_offset": 0, 00:20:45.267 "data_size": 65536 00:20:45.267 } 00:20:45.267 ] 00:20:45.267 }' 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.267 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.834 [2024-11-20 07:17:42.933959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.834 07:17:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:20:45.834 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:46.092 [2024-11-20 07:17:43.345845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:20:46.092 /dev/nbd0 00:20:46.092 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:46.092 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:46.092 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:46.092 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:46.093 1+0 records in 00:20:46.093 1+0 records out 00:20:46.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278829 s, 14.7 MB/s 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:20:46.093 07:17:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:20:46.351 07:17:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:20:46.919 512+0 records in 00:20:46.919 512+0 records out 00:20:46.919 100663296 bytes (101 MB, 96 MiB) copied, 0.631275 s, 159 MB/s 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:46.919 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:47.179 
[2024-11-20 07:17:44.365624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.179 [2024-11-20 07:17:44.377362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:47.179 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.180 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:47.180 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.180 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.180 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:47.180 "name": "raid_bdev1", 00:20:47.180 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:47.180 "strip_size_kb": 64, 00:20:47.180 "state": "online", 00:20:47.180 "raid_level": "raid5f", 00:20:47.180 "superblock": false, 00:20:47.180 "num_base_bdevs": 4, 00:20:47.180 "num_base_bdevs_discovered": 3, 00:20:47.180 "num_base_bdevs_operational": 3, 00:20:47.180 "base_bdevs_list": [ 00:20:47.180 { 00:20:47.180 "name": null, 00:20:47.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:47.180 "is_configured": false, 00:20:47.180 "data_offset": 0, 00:20:47.180 "data_size": 65536 00:20:47.180 }, 00:20:47.180 { 00:20:47.180 "name": "BaseBdev2", 00:20:47.180 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:47.180 "is_configured": true, 00:20:47.180 "data_offset": 0, 00:20:47.180 "data_size": 65536 00:20:47.180 }, 00:20:47.180 { 00:20:47.180 "name": "BaseBdev3", 00:20:47.180 "uuid": 
"c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:47.180 "is_configured": true, 00:20:47.180 "data_offset": 0, 00:20:47.180 "data_size": 65536 00:20:47.180 }, 00:20:47.180 { 00:20:47.180 "name": "BaseBdev4", 00:20:47.180 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:47.180 "is_configured": true, 00:20:47.180 "data_offset": 0, 00:20:47.180 "data_size": 65536 00:20:47.180 } 00:20:47.180 ] 00:20:47.180 }' 00:20:47.180 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:47.180 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.746 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:47.746 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.746 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:47.747 [2024-11-20 07:17:44.917506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:47.747 [2024-11-20 07:17:44.931752] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:20:47.747 07:17:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.747 07:17:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:47.747 [2024-11-20 07:17:44.940980] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:48.680 07:17:45 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:48.680 "name": "raid_bdev1", 00:20:48.680 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:48.680 "strip_size_kb": 64, 00:20:48.680 "state": "online", 00:20:48.680 "raid_level": "raid5f", 00:20:48.680 "superblock": false, 00:20:48.680 "num_base_bdevs": 4, 00:20:48.680 "num_base_bdevs_discovered": 4, 00:20:48.680 "num_base_bdevs_operational": 4, 00:20:48.680 "process": { 00:20:48.680 "type": "rebuild", 00:20:48.680 "target": "spare", 00:20:48.680 "progress": { 00:20:48.680 "blocks": 17280, 00:20:48.680 "percent": 8 00:20:48.680 } 00:20:48.680 }, 00:20:48.680 "base_bdevs_list": [ 00:20:48.680 { 00:20:48.680 "name": "spare", 00:20:48.680 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:48.680 "is_configured": true, 00:20:48.680 "data_offset": 0, 00:20:48.680 "data_size": 65536 00:20:48.680 }, 00:20:48.680 { 00:20:48.680 "name": "BaseBdev2", 00:20:48.680 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:48.680 "is_configured": true, 00:20:48.680 "data_offset": 0, 00:20:48.680 "data_size": 65536 00:20:48.680 }, 00:20:48.680 { 00:20:48.680 "name": "BaseBdev3", 00:20:48.680 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:48.680 "is_configured": true, 00:20:48.680 "data_offset": 0, 00:20:48.680 "data_size": 65536 00:20:48.680 }, 
00:20:48.680 { 00:20:48.680 "name": "BaseBdev4", 00:20:48.680 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:48.680 "is_configured": true, 00:20:48.680 "data_offset": 0, 00:20:48.680 "data_size": 65536 00:20:48.680 } 00:20:48.680 ] 00:20:48.680 }' 00:20:48.680 07:17:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:48.938 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:48.938 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:48.938 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:48.938 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:48.938 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.939 [2024-11-20 07:17:46.090475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.939 [2024-11-20 07:17:46.154521] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:48.939 [2024-11-20 07:17:46.154640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.939 [2024-11-20 07:17:46.154669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:48.939 [2024-11-20 07:17:46.154686] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.939 "name": "raid_bdev1", 00:20:48.939 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:48.939 "strip_size_kb": 64, 00:20:48.939 "state": "online", 00:20:48.939 "raid_level": "raid5f", 00:20:48.939 "superblock": false, 00:20:48.939 "num_base_bdevs": 4, 00:20:48.939 "num_base_bdevs_discovered": 3, 00:20:48.939 "num_base_bdevs_operational": 3, 00:20:48.939 "base_bdevs_list": [ 00:20:48.939 { 00:20:48.939 "name": null, 00:20:48.939 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:48.939 "is_configured": false, 00:20:48.939 "data_offset": 0, 00:20:48.939 "data_size": 65536 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "name": "BaseBdev2", 00:20:48.939 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:48.939 "is_configured": true, 00:20:48.939 "data_offset": 0, 00:20:48.939 "data_size": 65536 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "name": "BaseBdev3", 00:20:48.939 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:48.939 "is_configured": true, 00:20:48.939 "data_offset": 0, 00:20:48.939 "data_size": 65536 00:20:48.939 }, 00:20:48.939 { 00:20:48.939 "name": "BaseBdev4", 00:20:48.939 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:48.939 "is_configured": true, 00:20:48.939 "data_offset": 0, 00:20:48.939 "data_size": 65536 00:20:48.939 } 00:20:48.939 ] 00:20:48.939 }' 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.939 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.504 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:49.504 "name": "raid_bdev1", 00:20:49.504 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:49.504 "strip_size_kb": 64, 00:20:49.504 "state": "online", 00:20:49.504 "raid_level": "raid5f", 00:20:49.504 "superblock": false, 00:20:49.504 "num_base_bdevs": 4, 00:20:49.504 "num_base_bdevs_discovered": 3, 00:20:49.504 "num_base_bdevs_operational": 3, 00:20:49.504 "base_bdevs_list": [ 00:20:49.504 { 00:20:49.505 "name": null, 00:20:49.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:49.505 "is_configured": false, 00:20:49.505 "data_offset": 0, 00:20:49.505 "data_size": 65536 00:20:49.505 }, 00:20:49.505 { 00:20:49.505 "name": "BaseBdev2", 00:20:49.505 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:49.505 "is_configured": true, 00:20:49.505 "data_offset": 0, 00:20:49.505 "data_size": 65536 00:20:49.505 }, 00:20:49.505 { 00:20:49.505 "name": "BaseBdev3", 00:20:49.505 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:49.505 "is_configured": true, 00:20:49.505 "data_offset": 0, 00:20:49.505 "data_size": 65536 00:20:49.505 }, 00:20:49.505 { 00:20:49.505 "name": "BaseBdev4", 00:20:49.505 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:49.505 "is_configured": true, 00:20:49.505 "data_offset": 0, 00:20:49.505 "data_size": 65536 00:20:49.505 } 00:20:49.505 ] 00:20:49.505 }' 00:20:49.505 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:49.763 [2024-11-20 07:17:46.914749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:49.763 [2024-11-20 07:17:46.928343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.763 07:17:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:49.763 [2024-11-20 07:17:46.937319] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.697 "name": "raid_bdev1", 00:20:50.697 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:50.697 "strip_size_kb": 64, 00:20:50.697 "state": "online", 00:20:50.697 "raid_level": "raid5f", 00:20:50.697 "superblock": false, 00:20:50.697 "num_base_bdevs": 4, 00:20:50.697 "num_base_bdevs_discovered": 4, 00:20:50.697 "num_base_bdevs_operational": 4, 00:20:50.697 "process": { 00:20:50.697 "type": "rebuild", 00:20:50.697 "target": "spare", 00:20:50.697 "progress": { 00:20:50.697 "blocks": 17280, 00:20:50.697 "percent": 8 00:20:50.697 } 00:20:50.697 }, 00:20:50.697 "base_bdevs_list": [ 00:20:50.697 { 00:20:50.697 "name": "spare", 00:20:50.697 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:50.697 "is_configured": true, 00:20:50.697 "data_offset": 0, 00:20:50.697 "data_size": 65536 00:20:50.697 }, 00:20:50.697 { 00:20:50.697 "name": "BaseBdev2", 00:20:50.697 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:50.697 "is_configured": true, 00:20:50.697 "data_offset": 0, 00:20:50.697 "data_size": 65536 00:20:50.697 }, 00:20:50.697 { 00:20:50.697 "name": "BaseBdev3", 00:20:50.697 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:50.697 "is_configured": true, 00:20:50.697 "data_offset": 0, 00:20:50.697 "data_size": 65536 00:20:50.697 }, 00:20:50.697 { 00:20:50.697 "name": "BaseBdev4", 00:20:50.697 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:50.697 "is_configured": true, 00:20:50.697 "data_offset": 0, 00:20:50.697 "data_size": 65536 00:20:50.697 } 00:20:50.697 ] 00:20:50.697 }' 00:20:50.697 07:17:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=673 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:50.955 "name": "raid_bdev1", 00:20:50.955 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:50.955 "strip_size_kb": 64, 
00:20:50.955 "state": "online", 00:20:50.955 "raid_level": "raid5f", 00:20:50.955 "superblock": false, 00:20:50.955 "num_base_bdevs": 4, 00:20:50.955 "num_base_bdevs_discovered": 4, 00:20:50.955 "num_base_bdevs_operational": 4, 00:20:50.955 "process": { 00:20:50.955 "type": "rebuild", 00:20:50.955 "target": "spare", 00:20:50.955 "progress": { 00:20:50.955 "blocks": 21120, 00:20:50.955 "percent": 10 00:20:50.955 } 00:20:50.955 }, 00:20:50.955 "base_bdevs_list": [ 00:20:50.955 { 00:20:50.955 "name": "spare", 00:20:50.955 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:50.955 "is_configured": true, 00:20:50.955 "data_offset": 0, 00:20:50.955 "data_size": 65536 00:20:50.955 }, 00:20:50.955 { 00:20:50.955 "name": "BaseBdev2", 00:20:50.955 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:50.955 "is_configured": true, 00:20:50.955 "data_offset": 0, 00:20:50.955 "data_size": 65536 00:20:50.955 }, 00:20:50.955 { 00:20:50.955 "name": "BaseBdev3", 00:20:50.955 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:50.955 "is_configured": true, 00:20:50.955 "data_offset": 0, 00:20:50.955 "data_size": 65536 00:20:50.955 }, 00:20:50.955 { 00:20:50.955 "name": "BaseBdev4", 00:20:50.955 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:50.955 "is_configured": true, 00:20:50.955 "data_offset": 0, 00:20:50.955 "data_size": 65536 00:20:50.955 } 00:20:50.955 ] 00:20:50.955 }' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:50.955 07:17:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.341 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:52.341 "name": "raid_bdev1", 00:20:52.341 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:52.341 "strip_size_kb": 64, 00:20:52.341 "state": "online", 00:20:52.341 "raid_level": "raid5f", 00:20:52.341 "superblock": false, 00:20:52.341 "num_base_bdevs": 4, 00:20:52.341 "num_base_bdevs_discovered": 4, 00:20:52.341 "num_base_bdevs_operational": 4, 00:20:52.341 "process": { 00:20:52.341 "type": "rebuild", 00:20:52.341 "target": "spare", 00:20:52.341 "progress": { 00:20:52.341 "blocks": 42240, 00:20:52.341 "percent": 21 00:20:52.341 } 00:20:52.342 }, 00:20:52.342 "base_bdevs_list": [ 00:20:52.342 { 00:20:52.342 "name": "spare", 00:20:52.342 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:52.342 "is_configured": true, 
00:20:52.342 "data_offset": 0, 00:20:52.342 "data_size": 65536 00:20:52.342 }, 00:20:52.342 { 00:20:52.342 "name": "BaseBdev2", 00:20:52.342 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:52.342 "is_configured": true, 00:20:52.342 "data_offset": 0, 00:20:52.342 "data_size": 65536 00:20:52.342 }, 00:20:52.342 { 00:20:52.342 "name": "BaseBdev3", 00:20:52.342 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:52.342 "is_configured": true, 00:20:52.342 "data_offset": 0, 00:20:52.342 "data_size": 65536 00:20:52.342 }, 00:20:52.342 { 00:20:52.342 "name": "BaseBdev4", 00:20:52.342 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:52.342 "is_configured": true, 00:20:52.342 "data_offset": 0, 00:20:52.342 "data_size": 65536 00:20:52.342 } 00:20:52.342 ] 00:20:52.342 }' 00:20:52.342 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:52.342 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:52.342 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:52.342 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:52.342 07:17:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:53.278 "name": "raid_bdev1", 00:20:53.278 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:53.278 "strip_size_kb": 64, 00:20:53.278 "state": "online", 00:20:53.278 "raid_level": "raid5f", 00:20:53.278 "superblock": false, 00:20:53.278 "num_base_bdevs": 4, 00:20:53.278 "num_base_bdevs_discovered": 4, 00:20:53.278 "num_base_bdevs_operational": 4, 00:20:53.278 "process": { 00:20:53.278 "type": "rebuild", 00:20:53.278 "target": "spare", 00:20:53.278 "progress": { 00:20:53.278 "blocks": 65280, 00:20:53.278 "percent": 33 00:20:53.278 } 00:20:53.278 }, 00:20:53.278 "base_bdevs_list": [ 00:20:53.278 { 00:20:53.278 "name": "spare", 00:20:53.278 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:53.278 "is_configured": true, 00:20:53.278 "data_offset": 0, 00:20:53.278 "data_size": 65536 00:20:53.278 }, 00:20:53.278 { 00:20:53.278 "name": "BaseBdev2", 00:20:53.278 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:53.278 "is_configured": true, 00:20:53.278 "data_offset": 0, 00:20:53.278 "data_size": 65536 00:20:53.278 }, 00:20:53.278 { 00:20:53.278 "name": "BaseBdev3", 00:20:53.278 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:53.278 "is_configured": true, 00:20:53.278 "data_offset": 0, 00:20:53.278 "data_size": 65536 00:20:53.278 }, 00:20:53.278 { 00:20:53.278 "name": "BaseBdev4", 00:20:53.278 "uuid": 
"8655881d-9536-5f30-a5a2-7697a485230b", 00:20:53.278 "is_configured": true, 00:20:53.278 "data_offset": 0, 00:20:53.278 "data_size": 65536 00:20:53.278 } 00:20:53.278 ] 00:20:53.278 }' 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.278 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:53.537 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.537 07:17:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:54.473 "name": "raid_bdev1", 00:20:54.473 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:54.473 "strip_size_kb": 64, 00:20:54.473 "state": "online", 00:20:54.473 "raid_level": "raid5f", 00:20:54.473 "superblock": false, 00:20:54.473 "num_base_bdevs": 4, 00:20:54.473 "num_base_bdevs_discovered": 4, 00:20:54.473 "num_base_bdevs_operational": 4, 00:20:54.473 "process": { 00:20:54.473 "type": "rebuild", 00:20:54.473 "target": "spare", 00:20:54.473 "progress": { 00:20:54.473 "blocks": 88320, 00:20:54.473 "percent": 44 00:20:54.473 } 00:20:54.473 }, 00:20:54.473 "base_bdevs_list": [ 00:20:54.473 { 00:20:54.473 "name": "spare", 00:20:54.473 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:54.473 "is_configured": true, 00:20:54.473 "data_offset": 0, 00:20:54.473 "data_size": 65536 00:20:54.473 }, 00:20:54.473 { 00:20:54.473 "name": "BaseBdev2", 00:20:54.473 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:54.473 "is_configured": true, 00:20:54.473 "data_offset": 0, 00:20:54.473 "data_size": 65536 00:20:54.473 }, 00:20:54.473 { 00:20:54.473 "name": "BaseBdev3", 00:20:54.473 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:54.473 "is_configured": true, 00:20:54.473 "data_offset": 0, 00:20:54.473 "data_size": 65536 00:20:54.473 }, 00:20:54.473 { 00:20:54.473 "name": "BaseBdev4", 00:20:54.473 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:54.473 "is_configured": true, 00:20:54.473 "data_offset": 0, 00:20:54.473 "data_size": 65536 00:20:54.473 } 00:20:54.473 ] 00:20:54.473 }' 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.473 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:54.731 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:20:54.731 07:17:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:55.666 "name": "raid_bdev1", 00:20:55.666 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:55.666 "strip_size_kb": 64, 00:20:55.666 "state": "online", 00:20:55.666 "raid_level": "raid5f", 00:20:55.666 "superblock": false, 00:20:55.666 "num_base_bdevs": 4, 00:20:55.666 "num_base_bdevs_discovered": 4, 00:20:55.666 "num_base_bdevs_operational": 4, 00:20:55.666 "process": { 00:20:55.666 "type": "rebuild", 00:20:55.666 "target": "spare", 00:20:55.666 "progress": { 00:20:55.666 "blocks": 111360, 00:20:55.666 "percent": 56 00:20:55.666 } 00:20:55.666 }, 00:20:55.666 
"base_bdevs_list": [ 00:20:55.666 { 00:20:55.666 "name": "spare", 00:20:55.666 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:55.666 "is_configured": true, 00:20:55.666 "data_offset": 0, 00:20:55.666 "data_size": 65536 00:20:55.666 }, 00:20:55.666 { 00:20:55.666 "name": "BaseBdev2", 00:20:55.666 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:55.666 "is_configured": true, 00:20:55.666 "data_offset": 0, 00:20:55.666 "data_size": 65536 00:20:55.666 }, 00:20:55.666 { 00:20:55.666 "name": "BaseBdev3", 00:20:55.666 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:55.666 "is_configured": true, 00:20:55.666 "data_offset": 0, 00:20:55.666 "data_size": 65536 00:20:55.666 }, 00:20:55.666 { 00:20:55.666 "name": "BaseBdev4", 00:20:55.666 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:55.666 "is_configured": true, 00:20:55.666 "data_offset": 0, 00:20:55.666 "data_size": 65536 00:20:55.666 } 00:20:55.666 ] 00:20:55.666 }' 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:55.666 07:17:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.042 07:17:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:57.042 07:17:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.042 07:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.042 "name": "raid_bdev1", 00:20:57.042 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:57.042 "strip_size_kb": 64, 00:20:57.042 "state": "online", 00:20:57.042 "raid_level": "raid5f", 00:20:57.042 "superblock": false, 00:20:57.042 "num_base_bdevs": 4, 00:20:57.042 "num_base_bdevs_discovered": 4, 00:20:57.042 "num_base_bdevs_operational": 4, 00:20:57.042 "process": { 00:20:57.042 "type": "rebuild", 00:20:57.042 "target": "spare", 00:20:57.042 "progress": { 00:20:57.042 "blocks": 132480, 00:20:57.042 "percent": 67 00:20:57.042 } 00:20:57.042 }, 00:20:57.042 "base_bdevs_list": [ 00:20:57.042 { 00:20:57.042 "name": "spare", 00:20:57.042 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:57.042 "is_configured": true, 00:20:57.042 "data_offset": 0, 00:20:57.042 "data_size": 65536 00:20:57.042 }, 00:20:57.042 { 00:20:57.042 "name": "BaseBdev2", 00:20:57.042 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:57.042 "is_configured": true, 00:20:57.042 "data_offset": 0, 00:20:57.042 "data_size": 65536 00:20:57.042 }, 00:20:57.042 { 00:20:57.042 "name": "BaseBdev3", 00:20:57.042 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:57.042 
"is_configured": true, 00:20:57.042 "data_offset": 0, 00:20:57.042 "data_size": 65536 00:20:57.042 }, 00:20:57.042 { 00:20:57.042 "name": "BaseBdev4", 00:20:57.042 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:57.042 "is_configured": true, 00:20:57.042 "data_offset": 0, 00:20:57.042 "data_size": 65536 00:20:57.042 } 00:20:57.042 ] 00:20:57.042 }' 00:20:57.042 07:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.042 07:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.042 07:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.042 07:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.042 07:17:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:57.977 "name": "raid_bdev1", 00:20:57.977 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:57.977 "strip_size_kb": 64, 00:20:57.977 "state": "online", 00:20:57.977 "raid_level": "raid5f", 00:20:57.977 "superblock": false, 00:20:57.977 "num_base_bdevs": 4, 00:20:57.977 "num_base_bdevs_discovered": 4, 00:20:57.977 "num_base_bdevs_operational": 4, 00:20:57.977 "process": { 00:20:57.977 "type": "rebuild", 00:20:57.977 "target": "spare", 00:20:57.977 "progress": { 00:20:57.977 "blocks": 155520, 00:20:57.977 "percent": 79 00:20:57.977 } 00:20:57.977 }, 00:20:57.977 "base_bdevs_list": [ 00:20:57.977 { 00:20:57.977 "name": "spare", 00:20:57.977 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:57.977 "is_configured": true, 00:20:57.977 "data_offset": 0, 00:20:57.977 "data_size": 65536 00:20:57.977 }, 00:20:57.977 { 00:20:57.977 "name": "BaseBdev2", 00:20:57.977 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:57.977 "is_configured": true, 00:20:57.977 "data_offset": 0, 00:20:57.977 "data_size": 65536 00:20:57.977 }, 00:20:57.977 { 00:20:57.977 "name": "BaseBdev3", 00:20:57.977 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:57.977 "is_configured": true, 00:20:57.977 "data_offset": 0, 00:20:57.977 "data_size": 65536 00:20:57.977 }, 00:20:57.977 { 00:20:57.977 "name": "BaseBdev4", 00:20:57.977 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:57.977 "is_configured": true, 00:20:57.977 "data_offset": 0, 00:20:57.977 "data_size": 65536 00:20:57.977 } 00:20:57.977 ] 00:20:57.977 }' 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.977 07:17:55 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.977 07:17:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.352 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:59.352 "name": "raid_bdev1", 00:20:59.352 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:20:59.352 "strip_size_kb": 64, 00:20:59.352 "state": "online", 00:20:59.352 "raid_level": "raid5f", 00:20:59.352 "superblock": false, 00:20:59.352 "num_base_bdevs": 4, 00:20:59.352 "num_base_bdevs_discovered": 4, 00:20:59.352 "num_base_bdevs_operational": 4, 00:20:59.352 "process": { 00:20:59.352 
"type": "rebuild", 00:20:59.352 "target": "spare", 00:20:59.352 "progress": { 00:20:59.352 "blocks": 176640, 00:20:59.352 "percent": 89 00:20:59.352 } 00:20:59.352 }, 00:20:59.352 "base_bdevs_list": [ 00:20:59.352 { 00:20:59.352 "name": "spare", 00:20:59.353 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:20:59.353 "is_configured": true, 00:20:59.353 "data_offset": 0, 00:20:59.353 "data_size": 65536 00:20:59.353 }, 00:20:59.353 { 00:20:59.353 "name": "BaseBdev2", 00:20:59.353 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:20:59.353 "is_configured": true, 00:20:59.353 "data_offset": 0, 00:20:59.353 "data_size": 65536 00:20:59.353 }, 00:20:59.353 { 00:20:59.353 "name": "BaseBdev3", 00:20:59.353 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:20:59.353 "is_configured": true, 00:20:59.353 "data_offset": 0, 00:20:59.353 "data_size": 65536 00:20:59.353 }, 00:20:59.353 { 00:20:59.353 "name": "BaseBdev4", 00:20:59.353 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:20:59.353 "is_configured": true, 00:20:59.353 "data_offset": 0, 00:20:59.353 "data_size": 65536 00:20:59.353 } 00:20:59.353 ] 00:20:59.353 }' 00:20:59.353 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:59.353 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:59.353 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:59.353 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:59.353 07:17:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:00.288 [2024-11-20 07:17:57.352605] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:00.288 [2024-11-20 07:17:57.352724] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:00.288 [2024-11-20 07:17:57.352799] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.288 "name": "raid_bdev1", 00:21:00.288 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:21:00.288 "strip_size_kb": 64, 00:21:00.288 "state": "online", 00:21:00.288 "raid_level": "raid5f", 00:21:00.288 "superblock": false, 00:21:00.288 "num_base_bdevs": 4, 00:21:00.288 "num_base_bdevs_discovered": 4, 00:21:00.288 "num_base_bdevs_operational": 4, 00:21:00.288 "base_bdevs_list": [ 00:21:00.288 { 00:21:00.288 "name": "spare", 00:21:00.288 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:21:00.288 "is_configured": true, 00:21:00.288 "data_offset": 0, 00:21:00.288 "data_size": 65536 00:21:00.288 }, 00:21:00.288 { 
00:21:00.288 "name": "BaseBdev2", 00:21:00.288 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:21:00.288 "is_configured": true, 00:21:00.288 "data_offset": 0, 00:21:00.288 "data_size": 65536 00:21:00.288 }, 00:21:00.288 { 00:21:00.288 "name": "BaseBdev3", 00:21:00.288 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:21:00.288 "is_configured": true, 00:21:00.288 "data_offset": 0, 00:21:00.288 "data_size": 65536 00:21:00.288 }, 00:21:00.288 { 00:21:00.288 "name": "BaseBdev4", 00:21:00.288 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:21:00.288 "is_configured": true, 00:21:00.288 "data_offset": 0, 00:21:00.288 "data_size": 65536 00:21:00.288 } 00:21:00.288 ] 00:21:00.288 }' 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.288 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:00.289 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.547 "name": "raid_bdev1", 00:21:00.547 "uuid": "4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:21:00.547 "strip_size_kb": 64, 00:21:00.547 "state": "online", 00:21:00.547 "raid_level": "raid5f", 00:21:00.547 "superblock": false, 00:21:00.547 "num_base_bdevs": 4, 00:21:00.547 "num_base_bdevs_discovered": 4, 00:21:00.547 "num_base_bdevs_operational": 4, 00:21:00.547 "base_bdevs_list": [ 00:21:00.547 { 00:21:00.547 "name": "spare", 00:21:00.547 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "BaseBdev2", 00:21:00.547 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "BaseBdev3", 00:21:00.547 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "BaseBdev4", 00:21:00.547 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 } 00:21:00.547 ] 00:21:00.547 }' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:00.547 07:17:57 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:00.547 "name": "raid_bdev1", 00:21:00.547 "uuid": 
"4e12eacf-85b8-4dee-b7e4-9dce255c0394", 00:21:00.547 "strip_size_kb": 64, 00:21:00.547 "state": "online", 00:21:00.547 "raid_level": "raid5f", 00:21:00.547 "superblock": false, 00:21:00.547 "num_base_bdevs": 4, 00:21:00.547 "num_base_bdevs_discovered": 4, 00:21:00.547 "num_base_bdevs_operational": 4, 00:21:00.547 "base_bdevs_list": [ 00:21:00.547 { 00:21:00.547 "name": "spare", 00:21:00.547 "uuid": "fd802e6e-ca23-552e-9a67-c64265905c78", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "BaseBdev2", 00:21:00.547 "uuid": "b5256463-af34-5eb8-bafd-ddfdb0292f25", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "BaseBdev3", 00:21:00.547 "uuid": "c61b1e9f-3e45-561f-b86d-2c54e14d63c6", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 }, 00:21:00.547 { 00:21:00.547 "name": "BaseBdev4", 00:21:00.547 "uuid": "8655881d-9536-5f30-a5a2-7697a485230b", 00:21:00.547 "is_configured": true, 00:21:00.547 "data_offset": 0, 00:21:00.547 "data_size": 65536 00:21:00.547 } 00:21:00.547 ] 00:21:00.547 }' 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:00.547 07:17:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.112 [2024-11-20 07:17:58.276083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.112 [2024-11-20 07:17:58.276351] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:21:01.112 [2024-11-20 07:17:58.276498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:01.112 [2024-11-20 07:17:58.276626] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:01.112 [2024-11-20 07:17:58.276645] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.112 07:17:58 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.112 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:01.370 /dev/nbd0 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.370 1+0 records in 00:21:01.370 1+0 records out 00:21:01.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479314 s, 8.5 MB/s 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.370 07:17:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:01.936 /dev/nbd1 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:01.936 1+0 records in 00:21:01.936 1+0 records out 00:21:01.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491896 s, 8.3 MB/s 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:01.936 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:21:02.502 07:17:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84938 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84938 ']' 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84938 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:02.502 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84938 00:21:02.759 killing process with pid 84938 00:21:02.759 Received shutdown signal, test time was about 60.000000 seconds 00:21:02.759 00:21:02.759 Latency(us) 00:21:02.759 [2024-11-20T07:18:00.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.760 [2024-11-20T07:18:00.080Z] =================================================================================================================== 00:21:02.760 [2024-11-20T07:18:00.080Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.760 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:02.760 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:02.760 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84938' 00:21:02.760 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84938 00:21:02.760 07:17:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84938 00:21:02.760 [2024-11-20 07:17:59.824498] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.073 [2024-11-20 07:18:00.334793] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # 
return 0 00:21:04.449 00:21:04.449 real 0m20.492s 00:21:04.449 user 0m25.457s 00:21:04.449 sys 0m2.337s 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:21:04.449 ************************************ 00:21:04.449 END TEST raid5f_rebuild_test 00:21:04.449 ************************************ 00:21:04.449 07:18:01 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:21:04.449 07:18:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:04.449 07:18:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:04.449 07:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:04.449 ************************************ 00:21:04.449 START TEST raid5f_rebuild_test_sb 00:21:04.449 ************************************ 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:04.449 07:18:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 
00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85448 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85448 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85448 ']' 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:04.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:04.449 07:18:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:04.449 [2024-11-20 07:18:01.697798] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:21:04.449 [2024-11-20 07:18:01.698097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85448 ] 00:21:04.449 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:04.449 Zero copy mechanism will not be used. 00:21:04.707 [2024-11-20 07:18:01.888163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.965 [2024-11-20 07:18:02.037079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.965 [2024-11-20 07:18:02.276643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:04.965 [2024-11-20 07:18:02.276735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 BaseBdev1_malloc 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:05.531 [2024-11-20 07:18:02.731821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:05.531 [2024-11-20 07:18:02.731951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.531 [2024-11-20 07:18:02.731989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:05.531 [2024-11-20 07:18:02.732009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.531 [2024-11-20 07:18:02.734975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.531 [2024-11-20 07:18:02.735027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.531 BaseBdev1 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 BaseBdev2_malloc 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.531 [2024-11-20 07:18:02.784801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:05.531 
[2024-11-20 07:18:02.785205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.531 [2024-11-20 07:18:02.785264] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:05.531 [2024-11-20 07:18:02.785287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.531 [2024-11-20 07:18:02.788379] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.531 [2024-11-20 07:18:02.788563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:05.531 BaseBdev2 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.531 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 BaseBdev3_malloc 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 [2024-11-20 07:18:02.866366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:05.789 [2024-11-20 07:18:02.866487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.789 [2024-11-20 07:18:02.866532] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:05.789 [2024-11-20 07:18:02.866557] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.789 [2024-11-20 07:18:02.870238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.789 [2024-11-20 07:18:02.870458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:05.789 BaseBdev3 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 BaseBdev4_malloc 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 [2024-11-20 07:18:02.922617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:05.789 [2024-11-20 07:18:02.922700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.789 [2024-11-20 07:18:02.922733] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:05.789 [2024-11-20 07:18:02.922753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:21:05.789 [2024-11-20 07:18:02.925701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.789 [2024-11-20 07:18:02.925927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:05.789 BaseBdev4 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.789 spare_malloc 00:21:05.789 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.790 spare_delay 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.790 [2024-11-20 07:18:02.987726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.790 [2024-11-20 07:18:02.987815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.790 [2024-11-20 07:18:02.987850] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:05.790 [2024-11-20 07:18:02.987889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.790 [2024-11-20 07:18:02.990821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.790 [2024-11-20 07:18:02.990899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.790 spare 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.790 07:18:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.790 [2024-11-20 07:18:02.995875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.790 [2024-11-20 07:18:02.998395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.790 [2024-11-20 07:18:02.998487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:05.790 [2024-11-20 07:18:02.998570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:05.790 [2024-11-20 07:18:02.998834] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:05.790 [2024-11-20 07:18:02.998860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:05.790 [2024-11-20 07:18:02.999219] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:05.790 [2024-11-20 07:18:03.006239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:05.790 
[2024-11-20 07:18:03.006404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:05.790 [2024-11-20 07:18:03.006908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:05.790 "name": "raid_bdev1", 00:21:05.790 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:05.790 "strip_size_kb": 64, 00:21:05.790 "state": "online", 00:21:05.790 "raid_level": "raid5f", 00:21:05.790 "superblock": true, 00:21:05.790 "num_base_bdevs": 4, 00:21:05.790 "num_base_bdevs_discovered": 4, 00:21:05.790 "num_base_bdevs_operational": 4, 00:21:05.790 "base_bdevs_list": [ 00:21:05.790 { 00:21:05.790 "name": "BaseBdev1", 00:21:05.790 "uuid": "83ba3c63-b501-52e9-bb6c-1edf6a38748b", 00:21:05.790 "is_configured": true, 00:21:05.790 "data_offset": 2048, 00:21:05.790 "data_size": 63488 00:21:05.790 }, 00:21:05.790 { 00:21:05.790 "name": "BaseBdev2", 00:21:05.790 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:05.790 "is_configured": true, 00:21:05.790 "data_offset": 2048, 00:21:05.790 "data_size": 63488 00:21:05.790 }, 00:21:05.790 { 00:21:05.790 "name": "BaseBdev3", 00:21:05.790 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:05.790 "is_configured": true, 00:21:05.790 "data_offset": 2048, 00:21:05.790 "data_size": 63488 00:21:05.790 }, 00:21:05.790 { 00:21:05.790 "name": "BaseBdev4", 00:21:05.790 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:05.790 "is_configured": true, 00:21:05.790 "data_offset": 2048, 00:21:05.790 "data_size": 63488 00:21:05.790 } 00:21:05.790 ] 00:21:05.790 }' 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:05.790 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.356 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:06.356 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:06.356 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.357 [2024-11-20 07:18:03.575098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.357 07:18:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:06.923 [2024-11-20 07:18:03.994905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:06.923 /dev/nbd0 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:21:06.923 1+0 records in 00:21:06.923 1+0 records out 00:21:06.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450828 s, 9.1 MB/s 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:21:06.923 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:21:07.490 496+0 records in 00:21:07.490 496+0 records out 00:21:07.490 97517568 bytes (98 MB, 93 MiB) copied, 0.650396 s, 150 MB/s 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:07.490 07:18:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:08.056 [2024-11-20 07:18:05.071838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:08.056 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.057 [2024-11-20 07:18:05.087582] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:08.057 "name": "raid_bdev1", 00:21:08.057 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:08.057 "strip_size_kb": 64, 00:21:08.057 "state": "online", 00:21:08.057 "raid_level": "raid5f", 00:21:08.057 "superblock": true, 00:21:08.057 "num_base_bdevs": 4, 00:21:08.057 "num_base_bdevs_discovered": 3, 00:21:08.057 
"num_base_bdevs_operational": 3, 00:21:08.057 "base_bdevs_list": [ 00:21:08.057 { 00:21:08.057 "name": null, 00:21:08.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.057 "is_configured": false, 00:21:08.057 "data_offset": 0, 00:21:08.057 "data_size": 63488 00:21:08.057 }, 00:21:08.057 { 00:21:08.057 "name": "BaseBdev2", 00:21:08.057 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:08.057 "is_configured": true, 00:21:08.057 "data_offset": 2048, 00:21:08.057 "data_size": 63488 00:21:08.057 }, 00:21:08.057 { 00:21:08.057 "name": "BaseBdev3", 00:21:08.057 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:08.057 "is_configured": true, 00:21:08.057 "data_offset": 2048, 00:21:08.057 "data_size": 63488 00:21:08.057 }, 00:21:08.057 { 00:21:08.057 "name": "BaseBdev4", 00:21:08.057 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:08.057 "is_configured": true, 00:21:08.057 "data_offset": 2048, 00:21:08.057 "data_size": 63488 00:21:08.057 } 00:21:08.057 ] 00:21:08.057 }' 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:08.057 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.315 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:08.315 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.315 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:08.315 [2024-11-20 07:18:05.615683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:08.315 [2024-11-20 07:18:05.630151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:21:08.315 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.315 07:18:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:08.574 
[2024-11-20 07:18:05.639463] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:09.508 "name": "raid_bdev1", 00:21:09.508 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:09.508 "strip_size_kb": 64, 00:21:09.508 "state": "online", 00:21:09.508 "raid_level": "raid5f", 00:21:09.508 "superblock": true, 00:21:09.508 "num_base_bdevs": 4, 00:21:09.508 "num_base_bdevs_discovered": 4, 00:21:09.508 "num_base_bdevs_operational": 4, 00:21:09.508 "process": { 00:21:09.508 "type": "rebuild", 00:21:09.508 "target": "spare", 00:21:09.508 "progress": { 00:21:09.508 "blocks": 17280, 00:21:09.508 "percent": 9 00:21:09.508 } 00:21:09.508 }, 00:21:09.508 "base_bdevs_list": [ 00:21:09.508 { 00:21:09.508 "name": 
"spare", 00:21:09.508 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:09.508 "is_configured": true, 00:21:09.508 "data_offset": 2048, 00:21:09.508 "data_size": 63488 00:21:09.508 }, 00:21:09.508 { 00:21:09.508 "name": "BaseBdev2", 00:21:09.508 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:09.508 "is_configured": true, 00:21:09.508 "data_offset": 2048, 00:21:09.508 "data_size": 63488 00:21:09.508 }, 00:21:09.508 { 00:21:09.508 "name": "BaseBdev3", 00:21:09.508 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:09.508 "is_configured": true, 00:21:09.508 "data_offset": 2048, 00:21:09.508 "data_size": 63488 00:21:09.508 }, 00:21:09.508 { 00:21:09.508 "name": "BaseBdev4", 00:21:09.508 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:09.508 "is_configured": true, 00:21:09.508 "data_offset": 2048, 00:21:09.508 "data_size": 63488 00:21:09.508 } 00:21:09.508 ] 00:21:09.508 }' 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.508 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:09.508 [2024-11-20 07:18:06.785088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:09.767 [2024-11-20 07:18:06.853322] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:09.767 [2024-11-20 
07:18:06.853455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.767 [2024-11-20 07:18:06.853485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:09.767 [2024-11-20 07:18:06.853503] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.767 "name": "raid_bdev1", 00:21:09.767 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:09.767 "strip_size_kb": 64, 00:21:09.767 "state": "online", 00:21:09.767 "raid_level": "raid5f", 00:21:09.767 "superblock": true, 00:21:09.767 "num_base_bdevs": 4, 00:21:09.767 "num_base_bdevs_discovered": 3, 00:21:09.767 "num_base_bdevs_operational": 3, 00:21:09.767 "base_bdevs_list": [ 00:21:09.767 { 00:21:09.767 "name": null, 00:21:09.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.767 "is_configured": false, 00:21:09.767 "data_offset": 0, 00:21:09.767 "data_size": 63488 00:21:09.767 }, 00:21:09.767 { 00:21:09.767 "name": "BaseBdev2", 00:21:09.767 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:09.767 "is_configured": true, 00:21:09.767 "data_offset": 2048, 00:21:09.767 "data_size": 63488 00:21:09.767 }, 00:21:09.767 { 00:21:09.767 "name": "BaseBdev3", 00:21:09.767 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:09.767 "is_configured": true, 00:21:09.767 "data_offset": 2048, 00:21:09.767 "data_size": 63488 00:21:09.767 }, 00:21:09.767 { 00:21:09.767 "name": "BaseBdev4", 00:21:09.767 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:09.767 "is_configured": true, 00:21:09.767 "data_offset": 2048, 00:21:09.767 "data_size": 63488 00:21:09.767 } 00:21:09.767 ] 00:21:09.767 }' 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.767 07:18:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.393 "name": "raid_bdev1", 00:21:10.393 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:10.393 "strip_size_kb": 64, 00:21:10.393 "state": "online", 00:21:10.393 "raid_level": "raid5f", 00:21:10.393 "superblock": true, 00:21:10.393 "num_base_bdevs": 4, 00:21:10.393 "num_base_bdevs_discovered": 3, 00:21:10.393 "num_base_bdevs_operational": 3, 00:21:10.393 "base_bdevs_list": [ 00:21:10.393 { 00:21:10.393 "name": null, 00:21:10.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.393 "is_configured": false, 00:21:10.393 "data_offset": 0, 00:21:10.393 "data_size": 63488 00:21:10.393 }, 00:21:10.393 { 00:21:10.393 "name": "BaseBdev2", 00:21:10.393 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:10.393 "is_configured": true, 00:21:10.393 "data_offset": 2048, 00:21:10.393 "data_size": 63488 00:21:10.393 }, 00:21:10.393 { 00:21:10.393 "name": "BaseBdev3", 00:21:10.393 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:10.393 "is_configured": true, 
00:21:10.393 "data_offset": 2048, 00:21:10.393 "data_size": 63488 00:21:10.393 }, 00:21:10.393 { 00:21:10.393 "name": "BaseBdev4", 00:21:10.393 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:10.393 "is_configured": true, 00:21:10.393 "data_offset": 2048, 00:21:10.393 "data_size": 63488 00:21:10.393 } 00:21:10.393 ] 00:21:10.393 }' 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:10.393 [2024-11-20 07:18:07.557335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:10.393 [2024-11-20 07:18:07.570742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.393 07:18:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:10.393 [2024-11-20 07:18:07.579604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.327 07:18:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.327 "name": "raid_bdev1", 00:21:11.327 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:11.327 "strip_size_kb": 64, 00:21:11.327 "state": "online", 00:21:11.327 "raid_level": "raid5f", 00:21:11.327 "superblock": true, 00:21:11.327 "num_base_bdevs": 4, 00:21:11.327 "num_base_bdevs_discovered": 4, 00:21:11.327 "num_base_bdevs_operational": 4, 00:21:11.327 "process": { 00:21:11.327 "type": "rebuild", 00:21:11.327 "target": "spare", 00:21:11.327 "progress": { 00:21:11.327 "blocks": 17280, 00:21:11.327 "percent": 9 00:21:11.327 } 00:21:11.327 }, 00:21:11.327 "base_bdevs_list": [ 00:21:11.327 { 00:21:11.327 "name": "spare", 00:21:11.327 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:11.327 "is_configured": true, 00:21:11.327 "data_offset": 2048, 00:21:11.327 "data_size": 63488 00:21:11.327 }, 00:21:11.327 { 00:21:11.327 "name": "BaseBdev2", 00:21:11.327 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:11.327 "is_configured": true, 00:21:11.327 "data_offset": 2048, 00:21:11.327 "data_size": 63488 
00:21:11.327 }, 00:21:11.327 { 00:21:11.327 "name": "BaseBdev3", 00:21:11.327 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:11.327 "is_configured": true, 00:21:11.327 "data_offset": 2048, 00:21:11.327 "data_size": 63488 00:21:11.327 }, 00:21:11.327 { 00:21:11.327 "name": "BaseBdev4", 00:21:11.327 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:11.327 "is_configured": true, 00:21:11.327 "data_offset": 2048, 00:21:11.327 "data_size": 63488 00:21:11.327 } 00:21:11.327 ] 00:21:11.327 }' 00:21:11.327 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:11.586 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=693 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.586 07:18:08 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.586 "name": "raid_bdev1", 00:21:11.586 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:11.586 "strip_size_kb": 64, 00:21:11.586 "state": "online", 00:21:11.586 "raid_level": "raid5f", 00:21:11.586 "superblock": true, 00:21:11.586 "num_base_bdevs": 4, 00:21:11.586 "num_base_bdevs_discovered": 4, 00:21:11.586 "num_base_bdevs_operational": 4, 00:21:11.586 "process": { 00:21:11.586 "type": "rebuild", 00:21:11.586 "target": "spare", 00:21:11.586 "progress": { 00:21:11.586 "blocks": 21120, 00:21:11.586 "percent": 11 00:21:11.586 } 00:21:11.586 }, 00:21:11.586 "base_bdevs_list": [ 00:21:11.586 { 00:21:11.586 "name": "spare", 00:21:11.586 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:11.586 "is_configured": true, 00:21:11.586 "data_offset": 2048, 00:21:11.586 "data_size": 63488 00:21:11.586 }, 00:21:11.586 { 00:21:11.586 "name": "BaseBdev2", 00:21:11.586 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:11.586 "is_configured": true, 00:21:11.586 "data_offset": 2048, 00:21:11.586 "data_size": 63488 
00:21:11.586 }, 00:21:11.586 { 00:21:11.586 "name": "BaseBdev3", 00:21:11.586 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:11.586 "is_configured": true, 00:21:11.586 "data_offset": 2048, 00:21:11.586 "data_size": 63488 00:21:11.586 }, 00:21:11.586 { 00:21:11.586 "name": "BaseBdev4", 00:21:11.586 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:11.586 "is_configured": true, 00:21:11.586 "data_offset": 2048, 00:21:11.586 "data_size": 63488 00:21:11.586 } 00:21:11.586 ] 00:21:11.586 }' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:11.586 07:18:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.966 07:18:09 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:12.966 "name": "raid_bdev1", 00:21:12.966 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:12.966 "strip_size_kb": 64, 00:21:12.966 "state": "online", 00:21:12.966 "raid_level": "raid5f", 00:21:12.966 "superblock": true, 00:21:12.966 "num_base_bdevs": 4, 00:21:12.966 "num_base_bdevs_discovered": 4, 00:21:12.966 "num_base_bdevs_operational": 4, 00:21:12.966 "process": { 00:21:12.966 "type": "rebuild", 00:21:12.966 "target": "spare", 00:21:12.966 "progress": { 00:21:12.966 "blocks": 42240, 00:21:12.966 "percent": 22 00:21:12.966 } 00:21:12.966 }, 00:21:12.966 "base_bdevs_list": [ 00:21:12.966 { 00:21:12.966 "name": "spare", 00:21:12.966 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:12.966 "is_configured": true, 00:21:12.966 "data_offset": 2048, 00:21:12.966 "data_size": 63488 00:21:12.966 }, 00:21:12.966 { 00:21:12.966 "name": "BaseBdev2", 00:21:12.966 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:12.966 "is_configured": true, 00:21:12.966 "data_offset": 2048, 00:21:12.966 "data_size": 63488 00:21:12.966 }, 00:21:12.966 { 00:21:12.966 "name": "BaseBdev3", 00:21:12.966 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:12.966 "is_configured": true, 00:21:12.966 "data_offset": 2048, 00:21:12.966 "data_size": 63488 00:21:12.966 }, 00:21:12.966 { 00:21:12.966 "name": "BaseBdev4", 00:21:12.966 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:12.966 "is_configured": true, 00:21:12.966 "data_offset": 2048, 00:21:12.966 "data_size": 63488 00:21:12.966 } 00:21:12.966 ] 00:21:12.966 }' 00:21:12.966 07:18:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:12.966 07:18:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:12.966 07:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:12.966 07:18:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:13.904 "name": "raid_bdev1", 00:21:13.904 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:13.904 
"strip_size_kb": 64, 00:21:13.904 "state": "online", 00:21:13.904 "raid_level": "raid5f", 00:21:13.904 "superblock": true, 00:21:13.904 "num_base_bdevs": 4, 00:21:13.904 "num_base_bdevs_discovered": 4, 00:21:13.904 "num_base_bdevs_operational": 4, 00:21:13.904 "process": { 00:21:13.904 "type": "rebuild", 00:21:13.904 "target": "spare", 00:21:13.904 "progress": { 00:21:13.904 "blocks": 65280, 00:21:13.904 "percent": 34 00:21:13.904 } 00:21:13.904 }, 00:21:13.904 "base_bdevs_list": [ 00:21:13.904 { 00:21:13.904 "name": "spare", 00:21:13.904 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:13.904 "is_configured": true, 00:21:13.904 "data_offset": 2048, 00:21:13.904 "data_size": 63488 00:21:13.904 }, 00:21:13.904 { 00:21:13.904 "name": "BaseBdev2", 00:21:13.904 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:13.904 "is_configured": true, 00:21:13.904 "data_offset": 2048, 00:21:13.904 "data_size": 63488 00:21:13.904 }, 00:21:13.904 { 00:21:13.904 "name": "BaseBdev3", 00:21:13.904 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:13.904 "is_configured": true, 00:21:13.904 "data_offset": 2048, 00:21:13.904 "data_size": 63488 00:21:13.904 }, 00:21:13.904 { 00:21:13.904 "name": "BaseBdev4", 00:21:13.904 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:13.904 "is_configured": true, 00:21:13.904 "data_offset": 2048, 00:21:13.904 "data_size": 63488 00:21:13.904 } 00:21:13.904 ] 00:21:13.904 }' 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.904 07:18:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:15.282 
07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:15.282 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.282 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:15.282 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:15.282 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:15.283 "name": "raid_bdev1", 00:21:15.283 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:15.283 "strip_size_kb": 64, 00:21:15.283 "state": "online", 00:21:15.283 "raid_level": "raid5f", 00:21:15.283 "superblock": true, 00:21:15.283 "num_base_bdevs": 4, 00:21:15.283 "num_base_bdevs_discovered": 4, 00:21:15.283 "num_base_bdevs_operational": 4, 00:21:15.283 "process": { 00:21:15.283 "type": "rebuild", 00:21:15.283 "target": "spare", 00:21:15.283 "progress": { 00:21:15.283 "blocks": 86400, 00:21:15.283 "percent": 45 00:21:15.283 } 00:21:15.283 }, 00:21:15.283 "base_bdevs_list": [ 00:21:15.283 { 00:21:15.283 "name": "spare", 00:21:15.283 "uuid": 
"0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:15.283 "is_configured": true, 00:21:15.283 "data_offset": 2048, 00:21:15.283 "data_size": 63488 00:21:15.283 }, 00:21:15.283 { 00:21:15.283 "name": "BaseBdev2", 00:21:15.283 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:15.283 "is_configured": true, 00:21:15.283 "data_offset": 2048, 00:21:15.283 "data_size": 63488 00:21:15.283 }, 00:21:15.283 { 00:21:15.283 "name": "BaseBdev3", 00:21:15.283 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:15.283 "is_configured": true, 00:21:15.283 "data_offset": 2048, 00:21:15.283 "data_size": 63488 00:21:15.283 }, 00:21:15.283 { 00:21:15.283 "name": "BaseBdev4", 00:21:15.283 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:15.283 "is_configured": true, 00:21:15.283 "data_offset": 2048, 00:21:15.283 "data_size": 63488 00:21:15.283 } 00:21:15.283 ] 00:21:15.283 }' 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.283 07:18:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:16.218 "name": "raid_bdev1", 00:21:16.218 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:16.218 "strip_size_kb": 64, 00:21:16.218 "state": "online", 00:21:16.218 "raid_level": "raid5f", 00:21:16.218 "superblock": true, 00:21:16.218 "num_base_bdevs": 4, 00:21:16.218 "num_base_bdevs_discovered": 4, 00:21:16.218 "num_base_bdevs_operational": 4, 00:21:16.218 "process": { 00:21:16.218 "type": "rebuild", 00:21:16.218 "target": "spare", 00:21:16.218 "progress": { 00:21:16.218 "blocks": 109440, 00:21:16.218 "percent": 57 00:21:16.218 } 00:21:16.218 }, 00:21:16.218 "base_bdevs_list": [ 00:21:16.218 { 00:21:16.218 "name": "spare", 00:21:16.218 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:16.218 "is_configured": true, 00:21:16.218 "data_offset": 2048, 00:21:16.218 "data_size": 63488 00:21:16.218 }, 00:21:16.218 { 00:21:16.218 "name": "BaseBdev2", 00:21:16.218 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:16.218 "is_configured": true, 00:21:16.218 "data_offset": 2048, 00:21:16.218 "data_size": 63488 00:21:16.218 }, 00:21:16.218 { 00:21:16.218 "name": "BaseBdev3", 00:21:16.218 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:16.218 "is_configured": true, 00:21:16.218 
"data_offset": 2048, 00:21:16.218 "data_size": 63488 00:21:16.218 }, 00:21:16.218 { 00:21:16.218 "name": "BaseBdev4", 00:21:16.218 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:16.218 "is_configured": true, 00:21:16.218 "data_offset": 2048, 00:21:16.218 "data_size": 63488 00:21:16.218 } 00:21:16.218 ] 00:21:16.218 }' 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.218 07:18:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:17.650 "name": "raid_bdev1", 00:21:17.650 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:17.650 "strip_size_kb": 64, 00:21:17.650 "state": "online", 00:21:17.650 "raid_level": "raid5f", 00:21:17.650 "superblock": true, 00:21:17.650 "num_base_bdevs": 4, 00:21:17.650 "num_base_bdevs_discovered": 4, 00:21:17.650 "num_base_bdevs_operational": 4, 00:21:17.650 "process": { 00:21:17.650 "type": "rebuild", 00:21:17.650 "target": "spare", 00:21:17.650 "progress": { 00:21:17.650 "blocks": 130560, 00:21:17.650 "percent": 68 00:21:17.650 } 00:21:17.650 }, 00:21:17.650 "base_bdevs_list": [ 00:21:17.650 { 00:21:17.650 "name": "spare", 00:21:17.650 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:17.650 "is_configured": true, 00:21:17.650 "data_offset": 2048, 00:21:17.650 "data_size": 63488 00:21:17.650 }, 00:21:17.650 { 00:21:17.650 "name": "BaseBdev2", 00:21:17.650 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:17.650 "is_configured": true, 00:21:17.650 "data_offset": 2048, 00:21:17.650 "data_size": 63488 00:21:17.650 }, 00:21:17.650 { 00:21:17.650 "name": "BaseBdev3", 00:21:17.650 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:17.650 "is_configured": true, 00:21:17.650 "data_offset": 2048, 00:21:17.650 "data_size": 63488 00:21:17.650 }, 00:21:17.650 { 00:21:17.650 "name": "BaseBdev4", 00:21:17.650 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:17.650 "is_configured": true, 00:21:17.650 "data_offset": 2048, 00:21:17.650 "data_size": 63488 00:21:17.650 } 00:21:17.650 ] 00:21:17.650 }' 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:17.650 07:18:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:18.588 "name": "raid_bdev1", 00:21:18.588 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:18.588 "strip_size_kb": 64, 00:21:18.588 "state": "online", 00:21:18.588 "raid_level": "raid5f", 00:21:18.588 "superblock": true, 00:21:18.588 "num_base_bdevs": 4, 00:21:18.588 "num_base_bdevs_discovered": 4, 
00:21:18.588 "num_base_bdevs_operational": 4, 00:21:18.588 "process": { 00:21:18.588 "type": "rebuild", 00:21:18.588 "target": "spare", 00:21:18.588 "progress": { 00:21:18.588 "blocks": 153600, 00:21:18.588 "percent": 80 00:21:18.588 } 00:21:18.588 }, 00:21:18.588 "base_bdevs_list": [ 00:21:18.588 { 00:21:18.588 "name": "spare", 00:21:18.588 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:18.588 "is_configured": true, 00:21:18.588 "data_offset": 2048, 00:21:18.588 "data_size": 63488 00:21:18.588 }, 00:21:18.588 { 00:21:18.588 "name": "BaseBdev2", 00:21:18.588 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:18.588 "is_configured": true, 00:21:18.588 "data_offset": 2048, 00:21:18.588 "data_size": 63488 00:21:18.588 }, 00:21:18.588 { 00:21:18.588 "name": "BaseBdev3", 00:21:18.588 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:18.588 "is_configured": true, 00:21:18.588 "data_offset": 2048, 00:21:18.588 "data_size": 63488 00:21:18.588 }, 00:21:18.588 { 00:21:18.588 "name": "BaseBdev4", 00:21:18.588 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:18.588 "is_configured": true, 00:21:18.588 "data_offset": 2048, 00:21:18.588 "data_size": 63488 00:21:18.588 } 00:21:18.588 ] 00:21:18.588 }' 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:18.588 07:18:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.965 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:19.965 "name": "raid_bdev1", 00:21:19.965 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:19.965 "strip_size_kb": 64, 00:21:19.965 "state": "online", 00:21:19.965 "raid_level": "raid5f", 00:21:19.965 "superblock": true, 00:21:19.965 "num_base_bdevs": 4, 00:21:19.965 "num_base_bdevs_discovered": 4, 00:21:19.965 "num_base_bdevs_operational": 4, 00:21:19.965 "process": { 00:21:19.965 "type": "rebuild", 00:21:19.965 "target": "spare", 00:21:19.965 "progress": { 00:21:19.965 "blocks": 174720, 00:21:19.965 "percent": 91 00:21:19.965 } 00:21:19.965 }, 00:21:19.965 "base_bdevs_list": [ 00:21:19.965 { 00:21:19.965 "name": "spare", 00:21:19.965 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:19.965 "is_configured": true, 00:21:19.965 "data_offset": 2048, 00:21:19.965 "data_size": 63488 00:21:19.965 }, 00:21:19.965 { 00:21:19.965 "name": "BaseBdev2", 
00:21:19.965 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:19.965 "is_configured": true, 00:21:19.965 "data_offset": 2048, 00:21:19.965 "data_size": 63488 00:21:19.965 }, 00:21:19.965 { 00:21:19.965 "name": "BaseBdev3", 00:21:19.965 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:19.966 "is_configured": true, 00:21:19.966 "data_offset": 2048, 00:21:19.966 "data_size": 63488 00:21:19.966 }, 00:21:19.966 { 00:21:19.966 "name": "BaseBdev4", 00:21:19.966 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:19.966 "is_configured": true, 00:21:19.966 "data_offset": 2048, 00:21:19.966 "data_size": 63488 00:21:19.966 } 00:21:19.966 ] 00:21:19.966 }' 00:21:19.966 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:19.966 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:19.966 07:18:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:19.966 07:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:19.966 07:18:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:20.601 [2024-11-20 07:18:17.694015] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:20.601 [2024-11-20 07:18:17.694155] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:20.601 [2024-11-20 07:18:17.694335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:20.860 07:18:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:20.860 "name": "raid_bdev1", 00:21:20.860 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:20.860 "strip_size_kb": 64, 00:21:20.860 "state": "online", 00:21:20.860 "raid_level": "raid5f", 00:21:20.860 "superblock": true, 00:21:20.860 "num_base_bdevs": 4, 00:21:20.860 "num_base_bdevs_discovered": 4, 00:21:20.860 "num_base_bdevs_operational": 4, 00:21:20.860 "base_bdevs_list": [ 00:21:20.860 { 00:21:20.860 "name": "spare", 00:21:20.860 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:20.860 "is_configured": true, 00:21:20.860 "data_offset": 2048, 00:21:20.860 "data_size": 63488 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "name": "BaseBdev2", 00:21:20.860 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:20.860 "is_configured": true, 00:21:20.860 "data_offset": 2048, 00:21:20.860 "data_size": 63488 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "name": "BaseBdev3", 00:21:20.860 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:20.860 "is_configured": true, 00:21:20.860 "data_offset": 2048, 00:21:20.860 
"data_size": 63488 00:21:20.860 }, 00:21:20.860 { 00:21:20.860 "name": "BaseBdev4", 00:21:20.860 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:20.860 "is_configured": true, 00:21:20.860 "data_offset": 2048, 00:21:20.860 "data_size": 63488 00:21:20.860 } 00:21:20.860 ] 00:21:20.860 }' 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:20.860 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.119 07:18:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:21.119 "name": "raid_bdev1", 00:21:21.119 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:21.119 "strip_size_kb": 64, 00:21:21.119 "state": "online", 00:21:21.119 "raid_level": "raid5f", 00:21:21.119 "superblock": true, 00:21:21.119 "num_base_bdevs": 4, 00:21:21.119 "num_base_bdevs_discovered": 4, 00:21:21.119 "num_base_bdevs_operational": 4, 00:21:21.119 "base_bdevs_list": [ 00:21:21.119 { 00:21:21.119 "name": "spare", 00:21:21.119 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:21.119 "is_configured": true, 00:21:21.119 "data_offset": 2048, 00:21:21.119 "data_size": 63488 00:21:21.119 }, 00:21:21.119 { 00:21:21.119 "name": "BaseBdev2", 00:21:21.119 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:21.119 "is_configured": true, 00:21:21.119 "data_offset": 2048, 00:21:21.119 "data_size": 63488 00:21:21.119 }, 00:21:21.119 { 00:21:21.119 "name": "BaseBdev3", 00:21:21.119 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:21.119 "is_configured": true, 00:21:21.119 "data_offset": 2048, 00:21:21.119 "data_size": 63488 00:21:21.119 }, 00:21:21.119 { 00:21:21.119 "name": "BaseBdev4", 00:21:21.119 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:21.119 "is_configured": true, 00:21:21.119 "data_offset": 2048, 00:21:21.119 "data_size": 63488 00:21:21.119 } 00:21:21.119 ] 00:21:21.119 }' 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.119 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:21.119 "name": "raid_bdev1", 00:21:21.119 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:21.119 "strip_size_kb": 64, 00:21:21.119 "state": "online", 00:21:21.119 "raid_level": "raid5f", 00:21:21.119 "superblock": true, 00:21:21.119 "num_base_bdevs": 4, 00:21:21.119 "num_base_bdevs_discovered": 4, 00:21:21.119 
"num_base_bdevs_operational": 4, 00:21:21.119 "base_bdevs_list": [ 00:21:21.119 { 00:21:21.119 "name": "spare", 00:21:21.119 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:21.120 "is_configured": true, 00:21:21.120 "data_offset": 2048, 00:21:21.120 "data_size": 63488 00:21:21.120 }, 00:21:21.120 { 00:21:21.120 "name": "BaseBdev2", 00:21:21.120 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:21.120 "is_configured": true, 00:21:21.120 "data_offset": 2048, 00:21:21.120 "data_size": 63488 00:21:21.120 }, 00:21:21.120 { 00:21:21.120 "name": "BaseBdev3", 00:21:21.120 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:21.120 "is_configured": true, 00:21:21.120 "data_offset": 2048, 00:21:21.120 "data_size": 63488 00:21:21.120 }, 00:21:21.120 { 00:21:21.120 "name": "BaseBdev4", 00:21:21.120 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:21.120 "is_configured": true, 00:21:21.120 "data_offset": 2048, 00:21:21.120 "data_size": 63488 00:21:21.120 } 00:21:21.120 ] 00:21:21.120 }' 00:21:21.120 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:21.120 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 [2024-11-20 07:18:18.873585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:21.687 [2024-11-20 07:18:18.873630] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:21.687 [2024-11-20 07:18:18.873734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:21.687 [2024-11-20 07:18:18.873859] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:21:21.687 [2024-11-20 07:18:18.873917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:21:21.687 07:18:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:21.687 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:21.688 07:18:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:21.946 /dev/nbd0 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:21.946 1+0 records in 00:21:21.946 1+0 records out 00:21:21.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262999 s, 15.6 MB/s 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:21.946 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:22.204 /dev/nbd1 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:22.461 1+0 records in 00:21:22.461 1+0 records out 00:21:22.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295212 s, 13.9 MB/s 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:22.461 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.462 07:18:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:22.720 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.286 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.287 [2024-11-20 07:18:20.359833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:23.287 [2024-11-20 07:18:20.360062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.287 [2024-11-20 07:18:20.360113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:21:23.287 [2024-11-20 07:18:20.360130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.287 [2024-11-20 07:18:20.363071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.287 [2024-11-20 07:18:20.363240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:23.287 [2024-11-20 07:18:20.363412] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:23.287 [2024-11-20 07:18:20.363481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:23.287 [2024-11-20 07:18:20.363629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:23.287 [2024-11-20 07:18:20.363760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:21:23.287 [2024-11-20 07:18:20.363894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:23.287 spare 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.287 [2024-11-20 07:18:20.464084] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:23.287 [2024-11-20 07:18:20.464146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:21:23.287 [2024-11-20 07:18:20.464582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:21:23.287 [2024-11-20 07:18:20.471018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:23.287 [2024-11-20 07:18:20.471209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:23.287 [2024-11-20 07:18:20.471496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:23.287 07:18:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:23.287 "name": "raid_bdev1", 00:21:23.287 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:23.287 "strip_size_kb": 64, 00:21:23.287 "state": "online", 00:21:23.287 "raid_level": "raid5f", 00:21:23.287 "superblock": true, 00:21:23.287 "num_base_bdevs": 4, 00:21:23.287 "num_base_bdevs_discovered": 4, 00:21:23.287 "num_base_bdevs_operational": 4, 00:21:23.287 "base_bdevs_list": [ 00:21:23.287 { 00:21:23.287 "name": "spare", 00:21:23.287 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:23.287 "is_configured": true, 00:21:23.287 "data_offset": 2048, 00:21:23.287 "data_size": 63488 00:21:23.287 }, 00:21:23.287 { 00:21:23.287 "name": "BaseBdev2", 00:21:23.287 "uuid": 
"985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:23.287 "is_configured": true, 00:21:23.287 "data_offset": 2048, 00:21:23.287 "data_size": 63488 00:21:23.287 }, 00:21:23.287 { 00:21:23.287 "name": "BaseBdev3", 00:21:23.287 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:23.287 "is_configured": true, 00:21:23.287 "data_offset": 2048, 00:21:23.287 "data_size": 63488 00:21:23.287 }, 00:21:23.287 { 00:21:23.287 "name": "BaseBdev4", 00:21:23.287 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:23.287 "is_configured": true, 00:21:23.287 "data_offset": 2048, 00:21:23.287 "data_size": 63488 00:21:23.287 } 00:21:23.287 ] 00:21:23.287 }' 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:23.287 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.858 07:18:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.858 07:18:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:23.858 "name": "raid_bdev1", 00:21:23.858 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:23.858 "strip_size_kb": 64, 00:21:23.858 "state": "online", 00:21:23.858 "raid_level": "raid5f", 00:21:23.858 "superblock": true, 00:21:23.858 "num_base_bdevs": 4, 00:21:23.858 "num_base_bdevs_discovered": 4, 00:21:23.858 "num_base_bdevs_operational": 4, 00:21:23.858 "base_bdevs_list": [ 00:21:23.858 { 00:21:23.858 "name": "spare", 00:21:23.858 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:23.858 "is_configured": true, 00:21:23.858 "data_offset": 2048, 00:21:23.858 "data_size": 63488 00:21:23.858 }, 00:21:23.858 { 00:21:23.858 "name": "BaseBdev2", 00:21:23.858 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:23.858 "is_configured": true, 00:21:23.858 "data_offset": 2048, 00:21:23.858 "data_size": 63488 00:21:23.858 }, 00:21:23.858 { 00:21:23.858 "name": "BaseBdev3", 00:21:23.858 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:23.858 "is_configured": true, 00:21:23.859 "data_offset": 2048, 00:21:23.859 "data_size": 63488 00:21:23.859 }, 00:21:23.859 { 00:21:23.859 "name": "BaseBdev4", 00:21:23.859 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:23.859 "is_configured": true, 00:21:23.859 "data_offset": 2048, 00:21:23.859 "data_size": 63488 00:21:23.859 } 00:21:23.859 ] 00:21:23.859 }' 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:23.859 
07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.859 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:23.859 [2024-11-20 07:18:21.171189] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:24.128 "name": "raid_bdev1", 00:21:24.128 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:24.128 "strip_size_kb": 64, 00:21:24.128 "state": "online", 00:21:24.128 "raid_level": "raid5f", 00:21:24.128 "superblock": true, 00:21:24.128 "num_base_bdevs": 4, 00:21:24.128 "num_base_bdevs_discovered": 3, 00:21:24.128 "num_base_bdevs_operational": 3, 00:21:24.128 "base_bdevs_list": [ 00:21:24.128 { 00:21:24.128 "name": null, 00:21:24.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.128 "is_configured": false, 00:21:24.128 "data_offset": 0, 00:21:24.128 "data_size": 63488 00:21:24.128 }, 00:21:24.128 { 00:21:24.128 "name": "BaseBdev2", 00:21:24.128 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:24.128 "is_configured": true, 00:21:24.128 "data_offset": 2048, 00:21:24.128 "data_size": 63488 00:21:24.128 }, 00:21:24.128 { 00:21:24.128 "name": "BaseBdev3", 00:21:24.128 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:24.128 "is_configured": true, 00:21:24.128 "data_offset": 2048, 00:21:24.128 "data_size": 63488 00:21:24.128 }, 00:21:24.128 { 00:21:24.128 "name": "BaseBdev4", 
00:21:24.128 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:24.128 "is_configured": true, 00:21:24.128 "data_offset": 2048, 00:21:24.128 "data_size": 63488 00:21:24.128 } 00:21:24.128 ] 00:21:24.128 }' 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:24.128 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.398 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:24.398 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.398 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:24.398 [2024-11-20 07:18:21.691380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.398 [2024-11-20 07:18:21.691631] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:24.398 [2024-11-20 07:18:21.691658] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:24.398 [2024-11-20 07:18:21.691711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:24.398 [2024-11-20 07:18:21.704958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:21:24.398 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.398 07:18:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:24.398 [2024-11-20 07:18:21.713915] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.828 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:25.828 "name": "raid_bdev1", 00:21:25.828 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:25.828 "strip_size_kb": 64, 00:21:25.828 "state": "online", 00:21:25.828 
"raid_level": "raid5f", 00:21:25.828 "superblock": true, 00:21:25.828 "num_base_bdevs": 4, 00:21:25.828 "num_base_bdevs_discovered": 4, 00:21:25.828 "num_base_bdevs_operational": 4, 00:21:25.828 "process": { 00:21:25.828 "type": "rebuild", 00:21:25.828 "target": "spare", 00:21:25.828 "progress": { 00:21:25.828 "blocks": 17280, 00:21:25.828 "percent": 9 00:21:25.828 } 00:21:25.828 }, 00:21:25.828 "base_bdevs_list": [ 00:21:25.828 { 00:21:25.828 "name": "spare", 00:21:25.828 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:25.828 "is_configured": true, 00:21:25.828 "data_offset": 2048, 00:21:25.828 "data_size": 63488 00:21:25.828 }, 00:21:25.828 { 00:21:25.828 "name": "BaseBdev2", 00:21:25.828 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:25.828 "is_configured": true, 00:21:25.828 "data_offset": 2048, 00:21:25.828 "data_size": 63488 00:21:25.828 }, 00:21:25.828 { 00:21:25.828 "name": "BaseBdev3", 00:21:25.828 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:25.828 "is_configured": true, 00:21:25.829 "data_offset": 2048, 00:21:25.829 "data_size": 63488 00:21:25.829 }, 00:21:25.829 { 00:21:25.829 "name": "BaseBdev4", 00:21:25.829 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:25.829 "is_configured": true, 00:21:25.829 "data_offset": 2048, 00:21:25.829 "data_size": 63488 00:21:25.829 } 00:21:25.829 ] 00:21:25.829 }' 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.829 [2024-11-20 07:18:22.879176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.829 [2024-11-20 07:18:22.927154] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:25.829 [2024-11-20 07:18:22.927556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.829 [2024-11-20 07:18:22.927607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:25.829 [2024-11-20 07:18:22.927636] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:25.829 07:18:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.829 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:25.829 "name": "raid_bdev1", 00:21:25.829 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:25.829 "strip_size_kb": 64, 00:21:25.829 "state": "online", 00:21:25.829 "raid_level": "raid5f", 00:21:25.829 "superblock": true, 00:21:25.829 "num_base_bdevs": 4, 00:21:25.829 "num_base_bdevs_discovered": 3, 00:21:25.829 "num_base_bdevs_operational": 3, 00:21:25.829 "base_bdevs_list": [ 00:21:25.829 { 00:21:25.829 "name": null, 00:21:25.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.829 "is_configured": false, 00:21:25.829 "data_offset": 0, 00:21:25.829 "data_size": 63488 00:21:25.829 }, 00:21:25.829 { 00:21:25.829 "name": "BaseBdev2", 00:21:25.829 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:25.829 "is_configured": true, 00:21:25.829 "data_offset": 2048, 00:21:25.829 "data_size": 63488 00:21:25.829 }, 00:21:25.829 { 00:21:25.829 "name": "BaseBdev3", 00:21:25.829 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:25.829 "is_configured": true, 00:21:25.829 "data_offset": 2048, 00:21:25.829 "data_size": 63488 00:21:25.829 }, 00:21:25.829 { 00:21:25.829 "name": "BaseBdev4", 00:21:25.829 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:25.829 "is_configured": true, 00:21:25.829 "data_offset": 2048, 00:21:25.829 "data_size": 63488 00:21:25.829 } 00:21:25.829 ] 00:21:25.829 }' 
00:21:25.829 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:25.829 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.397 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:26.397 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.397 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:26.397 [2024-11-20 07:18:23.506785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:26.397 [2024-11-20 07:18:23.506886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.397 [2024-11-20 07:18:23.506932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:21:26.397 [2024-11-20 07:18:23.506953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.397 [2024-11-20 07:18:23.507590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.397 [2024-11-20 07:18:23.507622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:26.397 [2024-11-20 07:18:23.507752] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:26.397 [2024-11-20 07:18:23.507783] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:26.397 [2024-11-20 07:18:23.507796] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:26.397 [2024-11-20 07:18:23.507832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:26.397 [2024-11-20 07:18:23.521301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:21:26.397 spare 00:21:26.397 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.397 07:18:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:26.397 [2024-11-20 07:18:23.530312] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:27.333 "name": "raid_bdev1", 00:21:27.333 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:27.333 "strip_size_kb": 64, 00:21:27.333 "state": 
"online", 00:21:27.333 "raid_level": "raid5f", 00:21:27.333 "superblock": true, 00:21:27.333 "num_base_bdevs": 4, 00:21:27.333 "num_base_bdevs_discovered": 4, 00:21:27.333 "num_base_bdevs_operational": 4, 00:21:27.333 "process": { 00:21:27.333 "type": "rebuild", 00:21:27.333 "target": "spare", 00:21:27.333 "progress": { 00:21:27.333 "blocks": 17280, 00:21:27.333 "percent": 9 00:21:27.333 } 00:21:27.333 }, 00:21:27.333 "base_bdevs_list": [ 00:21:27.333 { 00:21:27.333 "name": "spare", 00:21:27.333 "uuid": "0dc3d758-3014-580d-b0fb-9a7a4c43e02e", 00:21:27.333 "is_configured": true, 00:21:27.333 "data_offset": 2048, 00:21:27.333 "data_size": 63488 00:21:27.333 }, 00:21:27.333 { 00:21:27.333 "name": "BaseBdev2", 00:21:27.333 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:27.333 "is_configured": true, 00:21:27.333 "data_offset": 2048, 00:21:27.333 "data_size": 63488 00:21:27.333 }, 00:21:27.333 { 00:21:27.333 "name": "BaseBdev3", 00:21:27.333 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:27.333 "is_configured": true, 00:21:27.333 "data_offset": 2048, 00:21:27.333 "data_size": 63488 00:21:27.333 }, 00:21:27.333 { 00:21:27.333 "name": "BaseBdev4", 00:21:27.333 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:27.333 "is_configured": true, 00:21:27.333 "data_offset": 2048, 00:21:27.333 "data_size": 63488 00:21:27.333 } 00:21:27.333 ] 00:21:27.333 }' 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:27.333 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:27.592 07:18:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.592 [2024-11-20 07:18:24.687657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.592 [2024-11-20 07:18:24.743540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:27.592 [2024-11-20 07:18:24.743642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.592 [2024-11-20 07:18:24.743675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:27.592 [2024-11-20 07:18:24.743686] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:27.592 07:18:24 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:27.592 "name": "raid_bdev1", 00:21:27.592 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:27.592 "strip_size_kb": 64, 00:21:27.592 "state": "online", 00:21:27.592 "raid_level": "raid5f", 00:21:27.592 "superblock": true, 00:21:27.592 "num_base_bdevs": 4, 00:21:27.592 "num_base_bdevs_discovered": 3, 00:21:27.592 "num_base_bdevs_operational": 3, 00:21:27.592 "base_bdevs_list": [ 00:21:27.592 { 00:21:27.592 "name": null, 00:21:27.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.592 "is_configured": false, 00:21:27.592 "data_offset": 0, 00:21:27.592 "data_size": 63488 00:21:27.592 }, 00:21:27.592 { 00:21:27.592 "name": "BaseBdev2", 00:21:27.592 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:27.592 "is_configured": true, 00:21:27.592 "data_offset": 2048, 00:21:27.592 "data_size": 63488 00:21:27.592 }, 00:21:27.592 { 00:21:27.592 "name": "BaseBdev3", 00:21:27.592 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:27.592 "is_configured": true, 00:21:27.592 "data_offset": 2048, 00:21:27.592 "data_size": 63488 00:21:27.592 }, 00:21:27.592 { 00:21:27.592 "name": "BaseBdev4", 00:21:27.592 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:27.592 "is_configured": true, 00:21:27.592 "data_offset": 2048, 00:21:27.592 
"data_size": 63488 00:21:27.592 } 00:21:27.592 ] 00:21:27.592 }' 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:27.592 07:18:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:28.160 "name": "raid_bdev1", 00:21:28.160 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:28.160 "strip_size_kb": 64, 00:21:28.160 "state": "online", 00:21:28.160 "raid_level": "raid5f", 00:21:28.160 "superblock": true, 00:21:28.160 "num_base_bdevs": 4, 00:21:28.160 "num_base_bdevs_discovered": 3, 00:21:28.160 "num_base_bdevs_operational": 3, 00:21:28.160 "base_bdevs_list": [ 00:21:28.160 { 00:21:28.160 "name": null, 00:21:28.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.160 
"is_configured": false, 00:21:28.160 "data_offset": 0, 00:21:28.160 "data_size": 63488 00:21:28.160 }, 00:21:28.160 { 00:21:28.160 "name": "BaseBdev2", 00:21:28.160 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:28.160 "is_configured": true, 00:21:28.160 "data_offset": 2048, 00:21:28.160 "data_size": 63488 00:21:28.160 }, 00:21:28.160 { 00:21:28.160 "name": "BaseBdev3", 00:21:28.160 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:28.160 "is_configured": true, 00:21:28.160 "data_offset": 2048, 00:21:28.160 "data_size": 63488 00:21:28.160 }, 00:21:28.160 { 00:21:28.160 "name": "BaseBdev4", 00:21:28.160 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:28.160 "is_configured": true, 00:21:28.160 "data_offset": 2048, 00:21:28.160 "data_size": 63488 00:21:28.160 } 00:21:28.160 ] 00:21:28.160 }' 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.160 07:18:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:28.160 [2024-11-20 07:18:25.438697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:28.160 [2024-11-20 07:18:25.438776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:28.160 [2024-11-20 07:18:25.438823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:28.160 [2024-11-20 07:18:25.438838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:28.160 [2024-11-20 07:18:25.439471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:28.160 [2024-11-20 07:18:25.439505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:28.160 [2024-11-20 07:18:25.439611] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:28.160 [2024-11-20 07:18:25.439633] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:28.160 [2024-11-20 07:18:25.439651] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:28.160 [2024-11-20 07:18:25.439668] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:28.160 BaseBdev1 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.160 07:18:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:29.536 "name": "raid_bdev1", 00:21:29.536 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:29.536 "strip_size_kb": 64, 00:21:29.536 "state": "online", 00:21:29.536 "raid_level": "raid5f", 00:21:29.536 "superblock": true, 00:21:29.536 "num_base_bdevs": 4, 00:21:29.536 "num_base_bdevs_discovered": 3, 00:21:29.536 "num_base_bdevs_operational": 3, 00:21:29.536 "base_bdevs_list": [ 00:21:29.536 { 00:21:29.536 "name": null, 00:21:29.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.536 "is_configured": false, 00:21:29.536 
"data_offset": 0, 00:21:29.536 "data_size": 63488 00:21:29.536 }, 00:21:29.536 { 00:21:29.536 "name": "BaseBdev2", 00:21:29.536 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:29.536 "is_configured": true, 00:21:29.536 "data_offset": 2048, 00:21:29.536 "data_size": 63488 00:21:29.536 }, 00:21:29.536 { 00:21:29.536 "name": "BaseBdev3", 00:21:29.536 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:29.536 "is_configured": true, 00:21:29.536 "data_offset": 2048, 00:21:29.536 "data_size": 63488 00:21:29.536 }, 00:21:29.536 { 00:21:29.536 "name": "BaseBdev4", 00:21:29.536 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:29.536 "is_configured": true, 00:21:29.536 "data_offset": 2048, 00:21:29.536 "data_size": 63488 00:21:29.536 } 00:21:29.536 ] 00:21:29.536 }' 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:29.536 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:21:29.795 07:18:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.795 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:29.795 "name": "raid_bdev1", 00:21:29.795 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:29.795 "strip_size_kb": 64, 00:21:29.795 "state": "online", 00:21:29.795 "raid_level": "raid5f", 00:21:29.795 "superblock": true, 00:21:29.796 "num_base_bdevs": 4, 00:21:29.796 "num_base_bdevs_discovered": 3, 00:21:29.796 "num_base_bdevs_operational": 3, 00:21:29.796 "base_bdevs_list": [ 00:21:29.796 { 00:21:29.796 "name": null, 00:21:29.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.796 "is_configured": false, 00:21:29.796 "data_offset": 0, 00:21:29.796 "data_size": 63488 00:21:29.796 }, 00:21:29.796 { 00:21:29.796 "name": "BaseBdev2", 00:21:29.796 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:29.796 "is_configured": true, 00:21:29.796 "data_offset": 2048, 00:21:29.796 "data_size": 63488 00:21:29.796 }, 00:21:29.796 { 00:21:29.796 "name": "BaseBdev3", 00:21:29.796 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:29.796 "is_configured": true, 00:21:29.796 "data_offset": 2048, 00:21:29.796 "data_size": 63488 00:21:29.796 }, 00:21:29.796 { 00:21:29.796 "name": "BaseBdev4", 00:21:29.796 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:29.796 "is_configured": true, 00:21:29.796 "data_offset": 2048, 00:21:29.796 "data_size": 63488 00:21:29.796 } 00:21:29.796 ] 00:21:29.796 }' 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:29.796 
07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.796 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:29.796 [2024-11-20 07:18:27.111226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:29.796 [2024-11-20 07:18:27.111432] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:29.796 [2024-11-20 07:18:27.111457] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:30.056 request: 00:21:30.056 { 00:21:30.056 "base_bdev": "BaseBdev1", 00:21:30.056 "raid_bdev": "raid_bdev1", 00:21:30.056 "method": "bdev_raid_add_base_bdev", 00:21:30.056 "req_id": 1 00:21:30.056 } 00:21:30.056 Got JSON-RPC error response 00:21:30.056 response: 00:21:30.056 { 00:21:30.056 "code": -22, 00:21:30.056 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:21:30.056 } 00:21:30.056 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:30.056 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:21:30.056 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:30.056 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:30.056 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:30.056 07:18:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:30.993 "name": "raid_bdev1", 00:21:30.993 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:30.993 "strip_size_kb": 64, 00:21:30.993 "state": "online", 00:21:30.993 "raid_level": "raid5f", 00:21:30.993 "superblock": true, 00:21:30.993 "num_base_bdevs": 4, 00:21:30.993 "num_base_bdevs_discovered": 3, 00:21:30.993 "num_base_bdevs_operational": 3, 00:21:30.993 "base_bdevs_list": [ 00:21:30.993 { 00:21:30.993 "name": null, 00:21:30.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.993 "is_configured": false, 00:21:30.993 "data_offset": 0, 00:21:30.993 "data_size": 63488 00:21:30.993 }, 00:21:30.993 { 00:21:30.993 "name": "BaseBdev2", 00:21:30.993 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:30.993 "is_configured": true, 00:21:30.993 "data_offset": 2048, 00:21:30.993 "data_size": 63488 00:21:30.993 }, 00:21:30.993 { 00:21:30.993 "name": "BaseBdev3", 00:21:30.993 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:30.993 "is_configured": true, 00:21:30.993 "data_offset": 2048, 00:21:30.993 "data_size": 63488 00:21:30.993 }, 00:21:30.993 { 00:21:30.993 "name": "BaseBdev4", 00:21:30.993 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:30.993 "is_configured": true, 00:21:30.993 "data_offset": 2048, 00:21:30.993 "data_size": 63488 00:21:30.993 } 00:21:30.993 ] 00:21:30.993 }' 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:30.993 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:31.561 "name": "raid_bdev1", 00:21:31.561 "uuid": "11f3110e-91eb-4e7c-afb2-277324105f9d", 00:21:31.561 "strip_size_kb": 64, 00:21:31.561 "state": "online", 00:21:31.561 "raid_level": "raid5f", 00:21:31.561 "superblock": true, 00:21:31.561 "num_base_bdevs": 4, 00:21:31.561 "num_base_bdevs_discovered": 3, 00:21:31.561 "num_base_bdevs_operational": 3, 00:21:31.561 "base_bdevs_list": [ 00:21:31.561 { 00:21:31.561 "name": null, 00:21:31.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:31.561 "is_configured": false, 00:21:31.561 "data_offset": 0, 00:21:31.561 "data_size": 63488 00:21:31.561 }, 00:21:31.561 { 00:21:31.561 "name": "BaseBdev2", 00:21:31.561 "uuid": "985b6a4d-7031-5710-846d-8b75f25bdb17", 00:21:31.561 "is_configured": true, 
00:21:31.561 "data_offset": 2048, 00:21:31.561 "data_size": 63488 00:21:31.561 }, 00:21:31.561 { 00:21:31.561 "name": "BaseBdev3", 00:21:31.561 "uuid": "0f05f9c1-7585-56da-af43-64aacd902baa", 00:21:31.561 "is_configured": true, 00:21:31.561 "data_offset": 2048, 00:21:31.561 "data_size": 63488 00:21:31.561 }, 00:21:31.561 { 00:21:31.561 "name": "BaseBdev4", 00:21:31.561 "uuid": "ebab5a4f-a124-565d-bb40-cf24992da6cd", 00:21:31.561 "is_configured": true, 00:21:31.561 "data_offset": 2048, 00:21:31.561 "data_size": 63488 00:21:31.561 } 00:21:31.561 ] 00:21:31.561 }' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85448 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85448 ']' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85448 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85448 00:21:31.561 killing process with pid 85448 00:21:31.561 Received shutdown signal, test time was about 60.000000 seconds 00:21:31.561 00:21:31.561 Latency(us) 00:21:31.561 [2024-11-20T07:18:28.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.561 [2024-11-20T07:18:28.881Z] 
=================================================================================================================== 00:21:31.561 [2024-11-20T07:18:28.881Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85448' 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85448 00:21:31.561 07:18:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85448 00:21:31.561 [2024-11-20 07:18:28.820564] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:31.561 [2024-11-20 07:18:28.820763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:31.561 [2024-11-20 07:18:28.820931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:31.561 [2024-11-20 07:18:28.820963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:32.129 [2024-11-20 07:18:29.264144] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:33.112 07:18:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:21:33.112 00:21:33.112 real 0m28.747s 00:21:33.112 user 0m37.299s 00:21:33.112 sys 0m2.982s 00:21:33.112 07:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.112 07:18:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:33.112 ************************************ 00:21:33.112 END TEST raid5f_rebuild_test_sb 00:21:33.112 ************************************ 00:21:33.112 07:18:30 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:21:33.112 07:18:30 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:21:33.112 07:18:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:33.112 07:18:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.112 07:18:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:33.112 ************************************ 00:21:33.112 START TEST raid_state_function_test_sb_4k 00:21:33.112 ************************************ 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:21:33.112 07:18:30 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86270 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86270' 00:21:33.112 Process raid pid: 86270 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86270 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86270 ']' 00:21:33.112 07:18:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.112 07:18:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:33.371 [2024-11-20 07:18:30.467801] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:21:33.371 [2024-11-20 07:18:30.468351] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.371 [2024-11-20 07:18:30.666588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.629 [2024-11-20 07:18:30.816394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.888 [2024-11-20 07:18:31.027356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.888 [2024-11-20 07:18:31.027401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:34.146 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.146 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:34.146 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:21:34.146 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.146 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.405 [2024-11-20 07:18:31.466442] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:34.405 [2024-11-20 07:18:31.466518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.405 [2024-11-20 07:18:31.466540] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.405 [2024-11-20 07:18:31.466562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.405 
07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.405 "name": "Existed_Raid", 00:21:34.405 "uuid": "aaa2f149-4b9e-4f2f-8e74-2b58c3447938", 00:21:34.405 "strip_size_kb": 0, 00:21:34.405 "state": "configuring", 00:21:34.405 "raid_level": "raid1", 00:21:34.405 "superblock": true, 00:21:34.405 "num_base_bdevs": 2, 00:21:34.405 "num_base_bdevs_discovered": 0, 00:21:34.405 "num_base_bdevs_operational": 2, 00:21:34.405 "base_bdevs_list": [ 00:21:34.405 { 00:21:34.405 "name": "BaseBdev1", 00:21:34.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.405 "is_configured": false, 00:21:34.405 "data_offset": 0, 00:21:34.405 "data_size": 0 00:21:34.405 }, 00:21:34.405 { 00:21:34.405 "name": "BaseBdev2", 00:21:34.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.405 "is_configured": false, 00:21:34.405 "data_offset": 0, 00:21:34.405 "data_size": 0 00:21:34.405 } 00:21:34.405 ] 00:21:34.405 }' 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.405 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.664 [2024-11-20 07:18:31.958514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:34.664 [2024-11-20 07:18:31.958563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.664 [2024-11-20 07:18:31.970483] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:34.664 [2024-11-20 07:18:31.970692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:34.664 [2024-11-20 07:18:31.970837] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:34.664 [2024-11-20 07:18:31.971006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:21:34.664 07:18:31 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.664 07:18:31 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.922 [2024-11-20 07:18:32.015798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.922 BaseBdev1 00:21:34.922 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.922 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:21:34.922 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.923 [ 00:21:34.923 { 00:21:34.923 "name": "BaseBdev1", 00:21:34.923 "aliases": [ 00:21:34.923 
"dbba9ff9-66ad-45d4-8116-3be9ff3e1c02" 00:21:34.923 ], 00:21:34.923 "product_name": "Malloc disk", 00:21:34.923 "block_size": 4096, 00:21:34.923 "num_blocks": 8192, 00:21:34.923 "uuid": "dbba9ff9-66ad-45d4-8116-3be9ff3e1c02", 00:21:34.923 "assigned_rate_limits": { 00:21:34.923 "rw_ios_per_sec": 0, 00:21:34.923 "rw_mbytes_per_sec": 0, 00:21:34.923 "r_mbytes_per_sec": 0, 00:21:34.923 "w_mbytes_per_sec": 0 00:21:34.923 }, 00:21:34.923 "claimed": true, 00:21:34.923 "claim_type": "exclusive_write", 00:21:34.923 "zoned": false, 00:21:34.923 "supported_io_types": { 00:21:34.923 "read": true, 00:21:34.923 "write": true, 00:21:34.923 "unmap": true, 00:21:34.923 "flush": true, 00:21:34.923 "reset": true, 00:21:34.923 "nvme_admin": false, 00:21:34.923 "nvme_io": false, 00:21:34.923 "nvme_io_md": false, 00:21:34.923 "write_zeroes": true, 00:21:34.923 "zcopy": true, 00:21:34.923 "get_zone_info": false, 00:21:34.923 "zone_management": false, 00:21:34.923 "zone_append": false, 00:21:34.923 "compare": false, 00:21:34.923 "compare_and_write": false, 00:21:34.923 "abort": true, 00:21:34.923 "seek_hole": false, 00:21:34.923 "seek_data": false, 00:21:34.923 "copy": true, 00:21:34.923 "nvme_iov_md": false 00:21:34.923 }, 00:21:34.923 "memory_domains": [ 00:21:34.923 { 00:21:34.923 "dma_device_id": "system", 00:21:34.923 "dma_device_type": 1 00:21:34.923 }, 00:21:34.923 { 00:21:34.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.923 "dma_device_type": 2 00:21:34.923 } 00:21:34.923 ], 00:21:34.923 "driver_specific": {} 00:21:34.923 } 00:21:34.923 ] 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:34.923 "name": "Existed_Raid", 00:21:34.923 "uuid": "e73f0091-8284-455b-aeaf-936fcc77905d", 00:21:34.923 "strip_size_kb": 0, 00:21:34.923 "state": "configuring", 00:21:34.923 "raid_level": "raid1", 00:21:34.923 "superblock": true, 00:21:34.923 "num_base_bdevs": 2, 00:21:34.923 
"num_base_bdevs_discovered": 1, 00:21:34.923 "num_base_bdevs_operational": 2, 00:21:34.923 "base_bdevs_list": [ 00:21:34.923 { 00:21:34.923 "name": "BaseBdev1", 00:21:34.923 "uuid": "dbba9ff9-66ad-45d4-8116-3be9ff3e1c02", 00:21:34.923 "is_configured": true, 00:21:34.923 "data_offset": 256, 00:21:34.923 "data_size": 7936 00:21:34.923 }, 00:21:34.923 { 00:21:34.923 "name": "BaseBdev2", 00:21:34.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.923 "is_configured": false, 00:21:34.923 "data_offset": 0, 00:21:34.923 "data_size": 0 00:21:34.923 } 00:21:34.923 ] 00:21:34.923 }' 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:34.923 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.491 [2024-11-20 07:18:32.588002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.491 [2024-11-20 07:18:32.588069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.491 [2024-11-20 07:18:32.596072] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:35.491 [2024-11-20 07:18:32.598739] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:35.491 [2024-11-20 07:18:32.598936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:35.491 "name": "Existed_Raid", 00:21:35.491 "uuid": "43951f97-6897-4afb-84e1-b89c6d08a586", 00:21:35.491 "strip_size_kb": 0, 00:21:35.491 "state": "configuring", 00:21:35.491 "raid_level": "raid1", 00:21:35.491 "superblock": true, 00:21:35.491 "num_base_bdevs": 2, 00:21:35.491 "num_base_bdevs_discovered": 1, 00:21:35.491 "num_base_bdevs_operational": 2, 00:21:35.491 "base_bdevs_list": [ 00:21:35.491 { 00:21:35.491 "name": "BaseBdev1", 00:21:35.491 "uuid": "dbba9ff9-66ad-45d4-8116-3be9ff3e1c02", 00:21:35.491 "is_configured": true, 00:21:35.491 "data_offset": 256, 00:21:35.491 "data_size": 7936 00:21:35.491 }, 00:21:35.491 { 00:21:35.491 "name": "BaseBdev2", 00:21:35.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.491 "is_configured": false, 00:21:35.491 "data_offset": 0, 00:21:35.491 "data_size": 0 00:21:35.491 } 00:21:35.491 ] 00:21:35.491 }' 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:35.491 07:18:32 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.058 07:18:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.058 [2024-11-20 07:18:33.159318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:36.058 [2024-11-20 07:18:33.159656] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:21:36.058 [2024-11-20 07:18:33.159678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:36.058 BaseBdev2 00:21:36.058 [2024-11-20 07:18:33.160082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:36.058 [2024-11-20 07:18:33.160404] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:36.058 [2024-11-20 07:18:33.160440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:21:36.058 [2024-11-20 07:18:33.160647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:21:36.058 07:18:33 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.058 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.058 [ 00:21:36.058 { 00:21:36.058 "name": "BaseBdev2", 00:21:36.058 "aliases": [ 00:21:36.058 "b474ad46-9bc9-43de-962e-bc9030a70584" 00:21:36.058 ], 00:21:36.058 "product_name": "Malloc disk", 00:21:36.058 "block_size": 4096, 00:21:36.058 "num_blocks": 8192, 00:21:36.058 "uuid": "b474ad46-9bc9-43de-962e-bc9030a70584", 00:21:36.058 "assigned_rate_limits": { 00:21:36.058 "rw_ios_per_sec": 0, 00:21:36.058 "rw_mbytes_per_sec": 0, 00:21:36.058 "r_mbytes_per_sec": 0, 00:21:36.058 "w_mbytes_per_sec": 0 00:21:36.058 }, 00:21:36.058 "claimed": true, 00:21:36.058 "claim_type": "exclusive_write", 00:21:36.058 "zoned": false, 00:21:36.058 "supported_io_types": { 00:21:36.058 "read": true, 00:21:36.058 "write": true, 00:21:36.058 "unmap": true, 00:21:36.058 "flush": true, 00:21:36.058 "reset": true, 00:21:36.058 "nvme_admin": false, 00:21:36.058 "nvme_io": false, 00:21:36.058 "nvme_io_md": false, 00:21:36.058 "write_zeroes": true, 00:21:36.058 "zcopy": true, 00:21:36.058 "get_zone_info": false, 00:21:36.058 "zone_management": false, 00:21:36.058 "zone_append": false, 00:21:36.058 "compare": false, 00:21:36.058 "compare_and_write": false, 00:21:36.058 "abort": true, 00:21:36.058 "seek_hole": false, 00:21:36.058 "seek_data": false, 00:21:36.058 "copy": true, 00:21:36.058 "nvme_iov_md": false 
00:21:36.058 }, 00:21:36.058 "memory_domains": [ 00:21:36.058 { 00:21:36.058 "dma_device_id": "system", 00:21:36.058 "dma_device_type": 1 00:21:36.058 }, 00:21:36.058 { 00:21:36.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.058 "dma_device_type": 2 00:21:36.058 } 00:21:36.058 ], 00:21:36.058 "driver_specific": {} 00:21:36.058 } 00:21:36.058 ] 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.059 "name": "Existed_Raid", 00:21:36.059 "uuid": "43951f97-6897-4afb-84e1-b89c6d08a586", 00:21:36.059 "strip_size_kb": 0, 00:21:36.059 "state": "online", 00:21:36.059 "raid_level": "raid1", 00:21:36.059 "superblock": true, 00:21:36.059 "num_base_bdevs": 2, 00:21:36.059 "num_base_bdevs_discovered": 2, 00:21:36.059 "num_base_bdevs_operational": 2, 00:21:36.059 "base_bdevs_list": [ 00:21:36.059 { 00:21:36.059 "name": "BaseBdev1", 00:21:36.059 "uuid": "dbba9ff9-66ad-45d4-8116-3be9ff3e1c02", 00:21:36.059 "is_configured": true, 00:21:36.059 "data_offset": 256, 00:21:36.059 "data_size": 7936 00:21:36.059 }, 00:21:36.059 { 00:21:36.059 "name": "BaseBdev2", 00:21:36.059 "uuid": "b474ad46-9bc9-43de-962e-bc9030a70584", 00:21:36.059 "is_configured": true, 00:21:36.059 "data_offset": 256, 00:21:36.059 "data_size": 7936 00:21:36.059 } 00:21:36.059 ] 00:21:36.059 }' 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.059 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:21:36.626 07:18:33 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.626 [2024-11-20 07:18:33.719903] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:36.626 "name": "Existed_Raid", 00:21:36.626 "aliases": [ 00:21:36.626 "43951f97-6897-4afb-84e1-b89c6d08a586" 00:21:36.626 ], 00:21:36.626 "product_name": "Raid Volume", 00:21:36.626 "block_size": 4096, 00:21:36.626 "num_blocks": 7936, 00:21:36.626 "uuid": "43951f97-6897-4afb-84e1-b89c6d08a586", 00:21:36.626 "assigned_rate_limits": { 00:21:36.626 "rw_ios_per_sec": 0, 00:21:36.626 "rw_mbytes_per_sec": 0, 00:21:36.626 "r_mbytes_per_sec": 0, 00:21:36.626 "w_mbytes_per_sec": 0 00:21:36.626 }, 00:21:36.626 "claimed": false, 00:21:36.626 "zoned": false, 00:21:36.626 "supported_io_types": { 00:21:36.626 "read": true, 
00:21:36.626 "write": true, 00:21:36.626 "unmap": false, 00:21:36.626 "flush": false, 00:21:36.626 "reset": true, 00:21:36.626 "nvme_admin": false, 00:21:36.626 "nvme_io": false, 00:21:36.626 "nvme_io_md": false, 00:21:36.626 "write_zeroes": true, 00:21:36.626 "zcopy": false, 00:21:36.626 "get_zone_info": false, 00:21:36.626 "zone_management": false, 00:21:36.626 "zone_append": false, 00:21:36.626 "compare": false, 00:21:36.626 "compare_and_write": false, 00:21:36.626 "abort": false, 00:21:36.626 "seek_hole": false, 00:21:36.626 "seek_data": false, 00:21:36.626 "copy": false, 00:21:36.626 "nvme_iov_md": false 00:21:36.626 }, 00:21:36.626 "memory_domains": [ 00:21:36.626 { 00:21:36.626 "dma_device_id": "system", 00:21:36.626 "dma_device_type": 1 00:21:36.626 }, 00:21:36.626 { 00:21:36.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.626 "dma_device_type": 2 00:21:36.626 }, 00:21:36.626 { 00:21:36.626 "dma_device_id": "system", 00:21:36.626 "dma_device_type": 1 00:21:36.626 }, 00:21:36.626 { 00:21:36.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:36.626 "dma_device_type": 2 00:21:36.626 } 00:21:36.626 ], 00:21:36.626 "driver_specific": { 00:21:36.626 "raid": { 00:21:36.626 "uuid": "43951f97-6897-4afb-84e1-b89c6d08a586", 00:21:36.626 "strip_size_kb": 0, 00:21:36.626 "state": "online", 00:21:36.626 "raid_level": "raid1", 00:21:36.626 "superblock": true, 00:21:36.626 "num_base_bdevs": 2, 00:21:36.626 "num_base_bdevs_discovered": 2, 00:21:36.626 "num_base_bdevs_operational": 2, 00:21:36.626 "base_bdevs_list": [ 00:21:36.626 { 00:21:36.626 "name": "BaseBdev1", 00:21:36.626 "uuid": "dbba9ff9-66ad-45d4-8116-3be9ff3e1c02", 00:21:36.626 "is_configured": true, 00:21:36.626 "data_offset": 256, 00:21:36.626 "data_size": 7936 00:21:36.626 }, 00:21:36.626 { 00:21:36.626 "name": "BaseBdev2", 00:21:36.626 "uuid": "b474ad46-9bc9-43de-962e-bc9030a70584", 00:21:36.626 "is_configured": true, 00:21:36.626 "data_offset": 256, 00:21:36.626 "data_size": 7936 00:21:36.626 } 
00:21:36.626 ] 00:21:36.626 } 00:21:36.626 } 00:21:36.626 }' 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:21:36.626 BaseBdev2' 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:36.626 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.627 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.886 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:36.886 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:36.886 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:21:36.886 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.886 07:18:33 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.886 [2024-11-20 07:18:33.975657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:36.886 "name": "Existed_Raid", 00:21:36.886 "uuid": "43951f97-6897-4afb-84e1-b89c6d08a586", 00:21:36.886 "strip_size_kb": 0, 00:21:36.886 "state": "online", 00:21:36.886 "raid_level": "raid1", 00:21:36.886 "superblock": true, 00:21:36.886 "num_base_bdevs": 2, 00:21:36.886 
"num_base_bdevs_discovered": 1, 00:21:36.886 "num_base_bdevs_operational": 1, 00:21:36.886 "base_bdevs_list": [ 00:21:36.886 { 00:21:36.886 "name": null, 00:21:36.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:36.886 "is_configured": false, 00:21:36.886 "data_offset": 0, 00:21:36.886 "data_size": 7936 00:21:36.886 }, 00:21:36.886 { 00:21:36.886 "name": "BaseBdev2", 00:21:36.886 "uuid": "b474ad46-9bc9-43de-962e-bc9030a70584", 00:21:36.886 "is_configured": true, 00:21:36.886 "data_offset": 256, 00:21:36.886 "data_size": 7936 00:21:36.886 } 00:21:36.886 ] 00:21:36.886 }' 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:36.886 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.472 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:21:37.473 07:18:34 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.473 [2024-11-20 07:18:34.647432] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:37.473 [2024-11-20 07:18:34.647746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:37.473 [2024-11-20 07:18:34.735645] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:37.473 [2024-11-20 07:18:34.735911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:37.473 [2024-11-20 07:18:34.736108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86270 00:21:37.473 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86270 ']' 00:21:37.731 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86270 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86270 00:21:37.732 killing process with pid 86270 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86270' 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86270 00:21:37.732 [2024-11-20 07:18:34.828641] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:37.732 07:18:34 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86270 00:21:37.732 [2024-11-20 07:18:34.843561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:38.668 07:18:35 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:21:38.668 ************************************ 00:21:38.668 END TEST raid_state_function_test_sb_4k 00:21:38.668 ************************************ 00:21:38.668 00:21:38.668 real 0m5.528s 00:21:38.668 user 
0m8.347s 00:21:38.668 sys 0m0.816s 00:21:38.668 07:18:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.668 07:18:35 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.668 07:18:35 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:21:38.668 07:18:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:38.668 07:18:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.668 07:18:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:38.668 ************************************ 00:21:38.668 START TEST raid_superblock_test_4k 00:21:38.668 ************************************ 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86527 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86527 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:21:38.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86527 ']' 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.668 07:18:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 [2024-11-20 07:18:36.043546] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:21:38.927 [2024-11-20 07:18:36.043730] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86527 ] 00:21:38.927 [2024-11-20 07:18:36.234076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.185 [2024-11-20 07:18:36.388255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.444 [2024-11-20 07:18:36.593884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.444 [2024-11-20 07:18:36.593960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.703 malloc1 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.703 07:18:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.703 [2024-11-20 07:18:37.001477] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:39.703 [2024-11-20 07:18:37.001565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.703 [2024-11-20 07:18:37.001602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:39.703 [2024-11-20 07:18:37.001618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.703 [2024-11-20 07:18:37.004518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.703 [2024-11-20 07:18:37.004565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:39.703 pt1 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.703 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.962 malloc2 00:21:39.962 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.962 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:39.962 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.962 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.963 [2024-11-20 07:18:37.057320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:39.963 [2024-11-20 07:18:37.057391] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.963 [2024-11-20 07:18:37.057424] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:39.963 [2024-11-20 07:18:37.057448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.963 [2024-11-20 07:18:37.060221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.963 [2024-11-20 
07:18:37.060267] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:39.963 pt2 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.963 [2024-11-20 07:18:37.065391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:39.963 [2024-11-20 07:18:37.067806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:39.963 [2024-11-20 07:18:37.068231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:39.963 [2024-11-20 07:18:37.068262] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:39.963 [2024-11-20 07:18:37.068575] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:39.963 [2024-11-20 07:18:37.068777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:39.963 [2024-11-20 07:18:37.068802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:39.963 [2024-11-20 07:18:37.069004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:39.963 "name": "raid_bdev1", 00:21:39.963 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:39.963 "strip_size_kb": 0, 00:21:39.963 "state": "online", 00:21:39.963 "raid_level": "raid1", 00:21:39.963 "superblock": true, 00:21:39.963 "num_base_bdevs": 2, 00:21:39.963 
"num_base_bdevs_discovered": 2, 00:21:39.963 "num_base_bdevs_operational": 2, 00:21:39.963 "base_bdevs_list": [ 00:21:39.963 { 00:21:39.963 "name": "pt1", 00:21:39.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:39.963 "is_configured": true, 00:21:39.963 "data_offset": 256, 00:21:39.963 "data_size": 7936 00:21:39.963 }, 00:21:39.963 { 00:21:39.963 "name": "pt2", 00:21:39.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:39.963 "is_configured": true, 00:21:39.963 "data_offset": 256, 00:21:39.963 "data_size": 7936 00:21:39.963 } 00:21:39.963 ] 00:21:39.963 }' 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:39.963 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.532 [2024-11-20 07:18:37.597848] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:40.532 "name": "raid_bdev1", 00:21:40.532 "aliases": [ 00:21:40.532 "96a04c3b-3cbd-495e-bece-fea77e9eb82e" 00:21:40.532 ], 00:21:40.532 "product_name": "Raid Volume", 00:21:40.532 "block_size": 4096, 00:21:40.532 "num_blocks": 7936, 00:21:40.532 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:40.532 "assigned_rate_limits": { 00:21:40.532 "rw_ios_per_sec": 0, 00:21:40.532 "rw_mbytes_per_sec": 0, 00:21:40.532 "r_mbytes_per_sec": 0, 00:21:40.532 "w_mbytes_per_sec": 0 00:21:40.532 }, 00:21:40.532 "claimed": false, 00:21:40.532 "zoned": false, 00:21:40.532 "supported_io_types": { 00:21:40.532 "read": true, 00:21:40.532 "write": true, 00:21:40.532 "unmap": false, 00:21:40.532 "flush": false, 00:21:40.532 "reset": true, 00:21:40.532 "nvme_admin": false, 00:21:40.532 "nvme_io": false, 00:21:40.532 "nvme_io_md": false, 00:21:40.532 "write_zeroes": true, 00:21:40.532 "zcopy": false, 00:21:40.532 "get_zone_info": false, 00:21:40.532 "zone_management": false, 00:21:40.532 "zone_append": false, 00:21:40.532 "compare": false, 00:21:40.532 "compare_and_write": false, 00:21:40.532 "abort": false, 00:21:40.532 "seek_hole": false, 00:21:40.532 "seek_data": false, 00:21:40.532 "copy": false, 00:21:40.532 "nvme_iov_md": false 00:21:40.532 }, 00:21:40.532 "memory_domains": [ 00:21:40.532 { 00:21:40.532 "dma_device_id": "system", 00:21:40.532 "dma_device_type": 1 00:21:40.532 }, 00:21:40.532 { 00:21:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.532 "dma_device_type": 2 00:21:40.532 }, 00:21:40.532 { 00:21:40.532 "dma_device_id": "system", 00:21:40.532 "dma_device_type": 1 00:21:40.532 }, 00:21:40.532 { 00:21:40.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:40.532 "dma_device_type": 2 00:21:40.532 } 00:21:40.532 ], 
00:21:40.532 "driver_specific": { 00:21:40.532 "raid": { 00:21:40.532 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:40.532 "strip_size_kb": 0, 00:21:40.532 "state": "online", 00:21:40.532 "raid_level": "raid1", 00:21:40.532 "superblock": true, 00:21:40.532 "num_base_bdevs": 2, 00:21:40.532 "num_base_bdevs_discovered": 2, 00:21:40.532 "num_base_bdevs_operational": 2, 00:21:40.532 "base_bdevs_list": [ 00:21:40.532 { 00:21:40.532 "name": "pt1", 00:21:40.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:40.532 "is_configured": true, 00:21:40.532 "data_offset": 256, 00:21:40.532 "data_size": 7936 00:21:40.532 }, 00:21:40.532 { 00:21:40.532 "name": "pt2", 00:21:40.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:40.532 "is_configured": true, 00:21:40.532 "data_offset": 256, 00:21:40.532 "data_size": 7936 00:21:40.532 } 00:21:40.532 ] 00:21:40.532 } 00:21:40.532 } 00:21:40.532 }' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:40.532 pt2' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.532 07:18:37 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.532 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.532 [2024-11-20 07:18:37.838041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=96a04c3b-3cbd-495e-bece-fea77e9eb82e 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 96a04c3b-3cbd-495e-bece-fea77e9eb82e ']' 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.792 [2024-11-20 07:18:37.885620] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.792 [2024-11-20 07:18:37.885670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:40.792 [2024-11-20 07:18:37.885834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.792 [2024-11-20 07:18:37.885999] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.792 [2024-11-20 07:18:37.886041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.792 07:18:37 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:40.793 07:18:37 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 [2024-11-20 07:18:38.013637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:40.793 [2024-11-20 07:18:38.016146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:40.793 [2024-11-20 07:18:38.016269] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:40.793 [2024-11-20 07:18:38.016355] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:40.793 [2024-11-20 07:18:38.016383] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:40.793 [2024-11-20 07:18:38.016400] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:21:40.793 request: 00:21:40.793 { 00:21:40.793 "name": "raid_bdev1", 00:21:40.793 "raid_level": "raid1", 00:21:40.793 "base_bdevs": [ 00:21:40.793 "malloc1", 00:21:40.793 "malloc2" 00:21:40.793 ], 00:21:40.793 "superblock": false, 00:21:40.793 "method": "bdev_raid_create", 00:21:40.793 "req_id": 1 00:21:40.793 } 00:21:40.793 Got JSON-RPC error response 00:21:40.793 response: 00:21:40.793 { 00:21:40.793 "code": -17, 00:21:40.793 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:40.793 } 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 [2024-11-20 07:18:38.069620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:40.793 [2024-11-20 07:18:38.069700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:40.793 [2024-11-20 07:18:38.069730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:40.793 [2024-11-20 07:18:38.069748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:40.793 [2024-11-20 07:18:38.072708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:40.793 [2024-11-20 07:18:38.072762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:40.793 [2024-11-20 07:18:38.072899] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:40.793 [2024-11-20 07:18:38.072987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:40.793 pt1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.793 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.052 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.052 "name": "raid_bdev1", 00:21:41.052 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:41.052 "strip_size_kb": 0, 00:21:41.052 "state": "configuring", 00:21:41.052 "raid_level": "raid1", 00:21:41.052 "superblock": true, 00:21:41.052 "num_base_bdevs": 2, 00:21:41.052 "num_base_bdevs_discovered": 1, 00:21:41.052 "num_base_bdevs_operational": 2, 00:21:41.052 "base_bdevs_list": [ 00:21:41.052 { 00:21:41.052 "name": "pt1", 00:21:41.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.052 "is_configured": true, 00:21:41.052 "data_offset": 256, 00:21:41.052 "data_size": 7936 00:21:41.052 }, 00:21:41.052 { 00:21:41.052 "name": null, 00:21:41.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.052 "is_configured": false, 00:21:41.052 "data_offset": 256, 00:21:41.052 "data_size": 7936 00:21:41.052 } 
00:21:41.052 ] 00:21:41.052 }' 00:21:41.052 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.052 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.323 [2024-11-20 07:18:38.577788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:41.323 [2024-11-20 07:18:38.577898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:41.323 [2024-11-20 07:18:38.577935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:41.323 [2024-11-20 07:18:38.577953] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:41.323 [2024-11-20 07:18:38.578551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:41.323 [2024-11-20 07:18:38.578601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:41.323 [2024-11-20 07:18:38.578708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:41.323 [2024-11-20 07:18:38.578751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:41.323 [2024-11-20 07:18:38.578924] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:21:41.323 [2024-11-20 07:18:38.578947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:41.323 [2024-11-20 07:18:38.579252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:41.323 [2024-11-20 07:18:38.579453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:21:41.323 [2024-11-20 07:18:38.579615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:21:41.323 [2024-11-20 07:18:38.579829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:41.323 pt2 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.323 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:41.324 "name": "raid_bdev1", 00:21:41.324 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:41.324 "strip_size_kb": 0, 00:21:41.324 "state": "online", 00:21:41.324 "raid_level": "raid1", 00:21:41.324 "superblock": true, 00:21:41.324 "num_base_bdevs": 2, 00:21:41.324 "num_base_bdevs_discovered": 2, 00:21:41.324 "num_base_bdevs_operational": 2, 00:21:41.324 "base_bdevs_list": [ 00:21:41.324 { 00:21:41.324 "name": "pt1", 00:21:41.324 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.324 "is_configured": true, 00:21:41.324 "data_offset": 256, 00:21:41.324 "data_size": 7936 00:21:41.324 }, 00:21:41.324 { 00:21:41.324 "name": "pt2", 00:21:41.324 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.324 "is_configured": true, 00:21:41.324 "data_offset": 256, 00:21:41.324 "data_size": 7936 00:21:41.324 } 00:21:41.324 ] 00:21:41.324 }' 00:21:41.324 07:18:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:41.324 07:18:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:41.899 [2024-11-20 07:18:39.070207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:21:41.899 "name": "raid_bdev1", 00:21:41.899 "aliases": [ 00:21:41.899 "96a04c3b-3cbd-495e-bece-fea77e9eb82e" 00:21:41.899 ], 00:21:41.899 "product_name": "Raid Volume", 00:21:41.899 "block_size": 4096, 00:21:41.899 "num_blocks": 7936, 00:21:41.899 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:41.899 "assigned_rate_limits": { 00:21:41.899 "rw_ios_per_sec": 0, 00:21:41.899 "rw_mbytes_per_sec": 0, 00:21:41.899 "r_mbytes_per_sec": 0, 00:21:41.899 "w_mbytes_per_sec": 0 00:21:41.899 }, 00:21:41.899 "claimed": false, 00:21:41.899 "zoned": false, 00:21:41.899 "supported_io_types": { 00:21:41.899 "read": true, 00:21:41.899 "write": true, 00:21:41.899 "unmap": false, 
00:21:41.899 "flush": false, 00:21:41.899 "reset": true, 00:21:41.899 "nvme_admin": false, 00:21:41.899 "nvme_io": false, 00:21:41.899 "nvme_io_md": false, 00:21:41.899 "write_zeroes": true, 00:21:41.899 "zcopy": false, 00:21:41.899 "get_zone_info": false, 00:21:41.899 "zone_management": false, 00:21:41.899 "zone_append": false, 00:21:41.899 "compare": false, 00:21:41.899 "compare_and_write": false, 00:21:41.899 "abort": false, 00:21:41.899 "seek_hole": false, 00:21:41.899 "seek_data": false, 00:21:41.899 "copy": false, 00:21:41.899 "nvme_iov_md": false 00:21:41.899 }, 00:21:41.899 "memory_domains": [ 00:21:41.899 { 00:21:41.899 "dma_device_id": "system", 00:21:41.899 "dma_device_type": 1 00:21:41.899 }, 00:21:41.899 { 00:21:41.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.899 "dma_device_type": 2 00:21:41.899 }, 00:21:41.899 { 00:21:41.899 "dma_device_id": "system", 00:21:41.899 "dma_device_type": 1 00:21:41.899 }, 00:21:41.899 { 00:21:41.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:41.899 "dma_device_type": 2 00:21:41.899 } 00:21:41.899 ], 00:21:41.899 "driver_specific": { 00:21:41.899 "raid": { 00:21:41.899 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:41.899 "strip_size_kb": 0, 00:21:41.899 "state": "online", 00:21:41.899 "raid_level": "raid1", 00:21:41.899 "superblock": true, 00:21:41.899 "num_base_bdevs": 2, 00:21:41.899 "num_base_bdevs_discovered": 2, 00:21:41.899 "num_base_bdevs_operational": 2, 00:21:41.899 "base_bdevs_list": [ 00:21:41.899 { 00:21:41.899 "name": "pt1", 00:21:41.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:41.899 "is_configured": true, 00:21:41.899 "data_offset": 256, 00:21:41.899 "data_size": 7936 00:21:41.899 }, 00:21:41.899 { 00:21:41.899 "name": "pt2", 00:21:41.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:41.899 "is_configured": true, 00:21:41.899 "data_offset": 256, 00:21:41.899 "data_size": 7936 00:21:41.899 } 00:21:41.899 ] 00:21:41.899 } 00:21:41.899 } 00:21:41.899 }' 00:21:41.899 
07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:21:41.899 pt2' 00:21:41.899 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:42.157 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:21:42.157 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:42.157 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:21:42.157 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:21:42.158 [2024-11-20 07:18:39.338288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 96a04c3b-3cbd-495e-bece-fea77e9eb82e '!=' 96a04c3b-3cbd-495e-bece-fea77e9eb82e ']' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.158 [2024-11-20 07:18:39.402044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.158 "name": "raid_bdev1", 00:21:42.158 "uuid": 
"96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:42.158 "strip_size_kb": 0, 00:21:42.158 "state": "online", 00:21:42.158 "raid_level": "raid1", 00:21:42.158 "superblock": true, 00:21:42.158 "num_base_bdevs": 2, 00:21:42.158 "num_base_bdevs_discovered": 1, 00:21:42.158 "num_base_bdevs_operational": 1, 00:21:42.158 "base_bdevs_list": [ 00:21:42.158 { 00:21:42.158 "name": null, 00:21:42.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.158 "is_configured": false, 00:21:42.158 "data_offset": 0, 00:21:42.158 "data_size": 7936 00:21:42.158 }, 00:21:42.158 { 00:21:42.158 "name": "pt2", 00:21:42.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.158 "is_configured": true, 00:21:42.158 "data_offset": 256, 00:21:42.158 "data_size": 7936 00:21:42.158 } 00:21:42.158 ] 00:21:42.158 }' 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.158 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 [2024-11-20 07:18:39.862142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:42.725 [2024-11-20 07:18:39.862316] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:42.725 [2024-11-20 07:18:39.862530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:42.725 [2024-11-20 07:18:39.862710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:42.725 [2024-11-20 07:18:39.862745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 [2024-11-20 07:18:39.922126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:42.725 [2024-11-20 07:18:39.922207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:42.725 [2024-11-20 07:18:39.922237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:21:42.725 [2024-11-20 07:18:39.922254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:42.725 [2024-11-20 07:18:39.925173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:42.725 [2024-11-20 07:18:39.925224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:42.725 [2024-11-20 07:18:39.925326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:42.725 [2024-11-20 07:18:39.925394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:42.725 [2024-11-20 07:18:39.925527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:21:42.725 [2024-11-20 07:18:39.925549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:42.725 [2024-11-20 07:18:39.925833] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:42.725 [2024-11-20 07:18:39.926046] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:21:42.725 [2024-11-20 07:18:39.926063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:21:42.725 [2024-11-20 07:18:39.926291] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.725 pt2 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.725 07:18:39 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:42.725 "name": "raid_bdev1", 00:21:42.725 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:42.725 "strip_size_kb": 0, 00:21:42.725 "state": "online", 00:21:42.725 "raid_level": "raid1", 00:21:42.725 "superblock": true, 00:21:42.725 "num_base_bdevs": 2, 00:21:42.725 "num_base_bdevs_discovered": 1, 00:21:42.725 "num_base_bdevs_operational": 1, 00:21:42.725 "base_bdevs_list": [ 00:21:42.725 { 00:21:42.725 "name": null, 00:21:42.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.725 "is_configured": false, 00:21:42.725 "data_offset": 256, 00:21:42.725 "data_size": 7936 00:21:42.725 }, 00:21:42.725 { 00:21:42.725 "name": "pt2", 00:21:42.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:42.725 "is_configured": true, 00:21:42.725 "data_offset": 256, 00:21:42.725 "data_size": 7936 00:21:42.725 } 00:21:42.725 ] 00:21:42.725 }' 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:42.725 07:18:39 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 [2024-11-20 07:18:40.438345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.294 [2024-11-20 07:18:40.438535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:43.294 [2024-11-20 07:18:40.438658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.294 [2024-11-20 07:18:40.438732] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:21:43.294 [2024-11-20 07:18:40.438748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 [2024-11-20 07:18:40.506425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:43.294 [2024-11-20 07:18:40.506641] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:43.294 [2024-11-20 07:18:40.506793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:21:43.294 [2024-11-20 07:18:40.506933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:43.294 [2024-11-20 07:18:40.510039] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:43.294 [2024-11-20 07:18:40.510208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:43.294 [2024-11-20 07:18:40.510435] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:43.294 [2024-11-20 07:18:40.510608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:43.294 pt1 00:21:43.294 [2024-11-20 07:18:40.510997] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:43.294 [2024-11-20 07:18:40.511022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:43.294 [2024-11-20 07:18:40.511048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:21:43.294 [2024-11-20 07:18:40.511135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:43.294 [2024-11-20 07:18:40.511252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:21:43.294 [2024-11-20 07:18:40.511268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:43.294 [2024-11-20 07:18:40.511601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:43.294 [2024-11-20 07:18:40.511793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:21:43.294 [2024-11-20 07:18:40.511818] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.294 [2024-11-20 07:18:40.512038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:43.294 "name": "raid_bdev1", 00:21:43.294 "uuid": "96a04c3b-3cbd-495e-bece-fea77e9eb82e", 00:21:43.294 "strip_size_kb": 0, 00:21:43.294 "state": "online", 00:21:43.294 
"raid_level": "raid1", 00:21:43.294 "superblock": true, 00:21:43.294 "num_base_bdevs": 2, 00:21:43.294 "num_base_bdevs_discovered": 1, 00:21:43.294 "num_base_bdevs_operational": 1, 00:21:43.294 "base_bdevs_list": [ 00:21:43.294 { 00:21:43.294 "name": null, 00:21:43.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.294 "is_configured": false, 00:21:43.294 "data_offset": 256, 00:21:43.294 "data_size": 7936 00:21:43.294 }, 00:21:43.294 { 00:21:43.294 "name": "pt2", 00:21:43.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:43.294 "is_configured": true, 00:21:43.294 "data_offset": 256, 00:21:43.294 "data_size": 7936 00:21:43.294 } 00:21:43.294 ] 00:21:43.294 }' 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:43.294 07:18:40 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:21:43.863 [2024-11-20 07:18:41.078975] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 96a04c3b-3cbd-495e-bece-fea77e9eb82e '!=' 96a04c3b-3cbd-495e-bece-fea77e9eb82e ']' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86527 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86527 ']' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86527 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86527 00:21:43.863 killing process with pid 86527 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86527' 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86527 00:21:43.863 [2024-11-20 07:18:41.146999] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:43.863 07:18:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86527 00:21:43.863 [2024-11-20 07:18:41.147124] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:43.863 [2024-11-20 07:18:41.147191] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:43.863 [2024-11-20 07:18:41.147214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:21:44.126 [2024-11-20 07:18:41.334641] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:45.063 07:18:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:21:45.063 00:21:45.063 real 0m6.428s 00:21:45.063 user 0m10.147s 00:21:45.063 sys 0m0.909s 00:21:45.063 07:18:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.063 ************************************ 00:21:45.063 END TEST raid_superblock_test_4k 00:21:45.063 ************************************ 00:21:45.063 07:18:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.423 07:18:42 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:21:45.423 07:18:42 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:21:45.423 07:18:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:45.423 07:18:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:45.423 07:18:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:45.423 ************************************ 00:21:45.423 START TEST raid_rebuild_test_sb_4k 00:21:45.423 ************************************ 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:21:45.423 
07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:21:45.423 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # 
strip_size=0 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86851 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86851 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86851 ']' 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.424 07:18:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:45.424 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:45.424 Zero copy mechanism will not be used. 00:21:45.424 [2024-11-20 07:18:42.511092] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:21:45.424 [2024-11-20 07:18:42.511250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86851 ] 00:21:45.424 [2024-11-20 07:18:42.687040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.683 [2024-11-20 07:18:42.858254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.942 [2024-11-20 07:18:43.063441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:45.942 [2024-11-20 07:18:43.063500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 BaseBdev1_malloc 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 [2024-11-20 07:18:43.574008] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:46.510 [2024-11-20 07:18:43.574239] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.510 [2024-11-20 07:18:43.574294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:21:46.510 [2024-11-20 07:18:43.574313] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.510 [2024-11-20 07:18:43.577300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.510 [2024-11-20 07:18:43.577480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:46.510 BaseBdev1 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 BaseBdev2_malloc 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 [2024-11-20 07:18:43.626642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:46.510 [2024-11-20 07:18:43.626734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:21:46.510 [2024-11-20 07:18:43.626765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:46.510 [2024-11-20 07:18:43.626790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.510 [2024-11-20 07:18:43.629714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.510 [2024-11-20 07:18:43.629767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:46.510 BaseBdev2 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 spare_malloc 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 spare_delay 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.510 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.510 
[2024-11-20 07:18:43.692807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:46.510 [2024-11-20 07:18:43.693039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:46.510 [2024-11-20 07:18:43.693080] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:46.510 [2024-11-20 07:18:43.693100] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:46.510 [2024-11-20 07:18:43.696005] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:46.511 [2024-11-20 07:18:43.696167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:46.511 spare 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.511 [2024-11-20 07:18:43.700941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.511 [2024-11-20 07:18:43.703472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.511 [2024-11-20 07:18:43.703888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:21:46.511 [2024-11-20 07:18:43.703924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:46.511 [2024-11-20 07:18:43.704279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:21:46.511 [2024-11-20 07:18:43.704522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:21:46.511 [2024-11-20 
07:18:43.704538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:21:46.511 [2024-11-20 07:18:43.704841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:46.511 "name": "raid_bdev1", 00:21:46.511 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:46.511 "strip_size_kb": 0, 00:21:46.511 "state": "online", 00:21:46.511 "raid_level": "raid1", 00:21:46.511 "superblock": true, 00:21:46.511 "num_base_bdevs": 2, 00:21:46.511 "num_base_bdevs_discovered": 2, 00:21:46.511 "num_base_bdevs_operational": 2, 00:21:46.511 "base_bdevs_list": [ 00:21:46.511 { 00:21:46.511 "name": "BaseBdev1", 00:21:46.511 "uuid": "2ac53e45-d2cc-5dab-9de7-fd4615999bcc", 00:21:46.511 "is_configured": true, 00:21:46.511 "data_offset": 256, 00:21:46.511 "data_size": 7936 00:21:46.511 }, 00:21:46.511 { 00:21:46.511 "name": "BaseBdev2", 00:21:46.511 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:46.511 "is_configured": true, 00:21:46.511 "data_offset": 256, 00:21:46.511 "data_size": 7936 00:21:46.511 } 00:21:46.511 ] 00:21:46.511 }' 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:46.511 07:18:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.079 [2024-11-20 07:18:44.181385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:47.079 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.079 
07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:47.338 [2024-11-20 07:18:44.529170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:21:47.338 /dev/nbd0 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:47.338 1+0 records in 00:21:47.338 1+0 records out 00:21:47.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445414 s, 9.2 MB/s 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:47.338 07:18:44 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:47.338 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:47.339 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:21:47.339 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:21:47.339 07:18:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:21:48.273 7936+0 records in 00:21:48.273 7936+0 records out 00:21:48.273 32505856 bytes (33 MB, 31 MiB) copied, 0.930283 s, 34.9 MB/s 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:48.273 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:48.531 
[2024-11-20 07:18:45.836449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.531 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.531 [2024-11-20 07:18:45.848609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.789 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.789 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:48.789 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:48.789 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:48.789 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:48.789 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:48.789 07:18:45 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:48.790 "name": "raid_bdev1", 00:21:48.790 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:48.790 "strip_size_kb": 0, 00:21:48.790 "state": "online", 00:21:48.790 "raid_level": "raid1", 00:21:48.790 "superblock": true, 00:21:48.790 "num_base_bdevs": 2, 00:21:48.790 "num_base_bdevs_discovered": 1, 00:21:48.790 "num_base_bdevs_operational": 1, 00:21:48.790 "base_bdevs_list": [ 00:21:48.790 { 00:21:48.790 "name": null, 00:21:48.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.790 "is_configured": false, 00:21:48.790 "data_offset": 0, 00:21:48.790 "data_size": 7936 00:21:48.790 }, 00:21:48.790 { 00:21:48.790 "name": "BaseBdev2", 00:21:48.790 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:48.790 "is_configured": true, 00:21:48.790 "data_offset": 256, 00:21:48.790 
"data_size": 7936 00:21:48.790 } 00:21:48.790 ] 00:21:48.790 }' 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:48.790 07:18:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.357 07:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:49.357 07:18:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.357 07:18:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:49.357 [2024-11-20 07:18:46.392750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:49.357 [2024-11-20 07:18:46.409514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:21:49.357 07:18:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.357 07:18:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:21:49.357 [2024-11-20 07:18:46.412031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:50.293 "name": "raid_bdev1", 00:21:50.293 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:50.293 "strip_size_kb": 0, 00:21:50.293 "state": "online", 00:21:50.293 "raid_level": "raid1", 00:21:50.293 "superblock": true, 00:21:50.293 "num_base_bdevs": 2, 00:21:50.293 "num_base_bdevs_discovered": 2, 00:21:50.293 "num_base_bdevs_operational": 2, 00:21:50.293 "process": { 00:21:50.293 "type": "rebuild", 00:21:50.293 "target": "spare", 00:21:50.293 "progress": { 00:21:50.293 "blocks": 2560, 00:21:50.293 "percent": 32 00:21:50.293 } 00:21:50.293 }, 00:21:50.293 "base_bdevs_list": [ 00:21:50.293 { 00:21:50.293 "name": "spare", 00:21:50.293 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:50.293 "is_configured": true, 00:21:50.293 "data_offset": 256, 00:21:50.293 "data_size": 7936 00:21:50.293 }, 00:21:50.293 { 00:21:50.293 "name": "BaseBdev2", 00:21:50.293 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:50.293 "is_configured": true, 00:21:50.293 "data_offset": 256, 00:21:50.293 "data_size": 7936 00:21:50.293 } 00:21:50.293 ] 00:21:50.293 }' 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.293 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.293 [2024-11-20 07:18:47.561041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.552 [2024-11-20 07:18:47.621061] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:50.552 [2024-11-20 07:18:47.621187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.552 [2024-11-20 07:18:47.621214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:50.552 [2024-11-20 07:18:47.621229] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:50.552 "name": "raid_bdev1", 00:21:50.552 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:50.552 "strip_size_kb": 0, 00:21:50.552 "state": "online", 00:21:50.552 "raid_level": "raid1", 00:21:50.552 "superblock": true, 00:21:50.552 "num_base_bdevs": 2, 00:21:50.552 "num_base_bdevs_discovered": 1, 00:21:50.552 "num_base_bdevs_operational": 1, 00:21:50.552 "base_bdevs_list": [ 00:21:50.552 { 00:21:50.552 "name": null, 00:21:50.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.552 "is_configured": false, 00:21:50.552 "data_offset": 0, 00:21:50.552 "data_size": 7936 00:21:50.552 }, 00:21:50.552 { 00:21:50.552 "name": "BaseBdev2", 00:21:50.552 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:50.552 "is_configured": true, 00:21:50.552 "data_offset": 256, 00:21:50.552 "data_size": 7936 00:21:50.552 } 00:21:50.552 ] 00:21:50.552 }' 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:50.552 07:18:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.119 07:18:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.119 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:51.119 "name": "raid_bdev1", 00:21:51.119 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:51.119 "strip_size_kb": 0, 00:21:51.119 "state": "online", 00:21:51.119 "raid_level": "raid1", 00:21:51.119 "superblock": true, 00:21:51.119 "num_base_bdevs": 2, 00:21:51.119 "num_base_bdevs_discovered": 1, 00:21:51.119 "num_base_bdevs_operational": 1, 00:21:51.119 "base_bdevs_list": [ 00:21:51.119 { 00:21:51.119 "name": null, 00:21:51.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.119 "is_configured": false, 00:21:51.119 "data_offset": 0, 00:21:51.119 "data_size": 7936 00:21:51.120 }, 00:21:51.120 { 00:21:51.120 "name": "BaseBdev2", 00:21:51.120 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:51.120 "is_configured": true, 00:21:51.120 "data_offset": 
256, 00:21:51.120 "data_size": 7936 00:21:51.120 } 00:21:51.120 ] 00:21:51.120 }' 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:51.120 [2024-11-20 07:18:48.297606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:51.120 [2024-11-20 07:18:48.313141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.120 07:18:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:21:51.120 [2024-11-20 07:18:48.315638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.054 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.054 "name": "raid_bdev1", 00:21:52.054 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:52.054 "strip_size_kb": 0, 00:21:52.054 "state": "online", 00:21:52.054 "raid_level": "raid1", 00:21:52.054 "superblock": true, 00:21:52.054 "num_base_bdevs": 2, 00:21:52.054 "num_base_bdevs_discovered": 2, 00:21:52.054 "num_base_bdevs_operational": 2, 00:21:52.054 "process": { 00:21:52.054 "type": "rebuild", 00:21:52.054 "target": "spare", 00:21:52.054 "progress": { 00:21:52.054 "blocks": 2560, 00:21:52.054 "percent": 32 00:21:52.054 } 00:21:52.054 }, 00:21:52.054 "base_bdevs_list": [ 00:21:52.054 { 00:21:52.054 "name": "spare", 00:21:52.054 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:52.054 "is_configured": true, 00:21:52.054 "data_offset": 256, 00:21:52.054 "data_size": 7936 00:21:52.054 }, 00:21:52.054 { 00:21:52.054 "name": "BaseBdev2", 00:21:52.054 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:52.054 "is_configured": true, 00:21:52.054 "data_offset": 256, 00:21:52.054 "data_size": 7936 00:21:52.054 } 00:21:52.054 ] 00:21:52.054 }' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:52.312 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=734 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:52.312 "name": "raid_bdev1", 00:21:52.312 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:52.312 "strip_size_kb": 0, 00:21:52.312 "state": "online", 00:21:52.312 "raid_level": "raid1", 00:21:52.312 "superblock": true, 00:21:52.312 "num_base_bdevs": 2, 00:21:52.312 "num_base_bdevs_discovered": 2, 00:21:52.312 "num_base_bdevs_operational": 2, 00:21:52.312 "process": { 00:21:52.312 "type": "rebuild", 00:21:52.312 "target": "spare", 00:21:52.312 "progress": { 00:21:52.312 "blocks": 2816, 00:21:52.312 "percent": 35 00:21:52.312 } 00:21:52.312 }, 00:21:52.312 "base_bdevs_list": [ 00:21:52.312 { 00:21:52.312 "name": "spare", 00:21:52.312 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:52.312 "is_configured": true, 00:21:52.312 "data_offset": 256, 00:21:52.312 "data_size": 7936 00:21:52.312 }, 00:21:52.312 { 00:21:52.312 "name": "BaseBdev2", 00:21:52.312 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:52.312 "is_configured": true, 00:21:52.312 "data_offset": 256, 00:21:52.312 "data_size": 7936 00:21:52.312 } 00:21:52.312 ] 00:21:52.312 }' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:52.312 07:18:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.318 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:53.576 "name": "raid_bdev1", 00:21:53.576 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:53.576 "strip_size_kb": 0, 00:21:53.576 "state": "online", 00:21:53.576 "raid_level": "raid1", 00:21:53.576 "superblock": true, 00:21:53.576 "num_base_bdevs": 2, 00:21:53.576 "num_base_bdevs_discovered": 2, 00:21:53.576 "num_base_bdevs_operational": 2, 00:21:53.576 "process": { 00:21:53.576 "type": "rebuild", 00:21:53.576 "target": "spare", 00:21:53.576 "progress": { 00:21:53.576 "blocks": 5888, 00:21:53.576 "percent": 74 00:21:53.576 } 00:21:53.576 }, 00:21:53.576 "base_bdevs_list": [ 00:21:53.576 { 
00:21:53.576 "name": "spare", 00:21:53.576 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:53.576 "is_configured": true, 00:21:53.576 "data_offset": 256, 00:21:53.576 "data_size": 7936 00:21:53.576 }, 00:21:53.576 { 00:21:53.576 "name": "BaseBdev2", 00:21:53.576 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:53.576 "is_configured": true, 00:21:53.576 "data_offset": 256, 00:21:53.576 "data_size": 7936 00:21:53.576 } 00:21:53.576 ] 00:21:53.576 }' 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:53.576 07:18:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:54.144 [2024-11-20 07:18:51.438890] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:54.144 [2024-11-20 07:18:51.439010] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:54.144 [2024-11-20 07:18:51.439163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.712 "name": "raid_bdev1", 00:21:54.712 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:54.712 "strip_size_kb": 0, 00:21:54.712 "state": "online", 00:21:54.712 "raid_level": "raid1", 00:21:54.712 "superblock": true, 00:21:54.712 "num_base_bdevs": 2, 00:21:54.712 "num_base_bdevs_discovered": 2, 00:21:54.712 "num_base_bdevs_operational": 2, 00:21:54.712 "base_bdevs_list": [ 00:21:54.712 { 00:21:54.712 "name": "spare", 00:21:54.712 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:54.712 "is_configured": true, 00:21:54.712 "data_offset": 256, 00:21:54.712 "data_size": 7936 00:21:54.712 }, 00:21:54.712 { 00:21:54.712 "name": "BaseBdev2", 00:21:54.712 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:54.712 "is_configured": true, 00:21:54.712 "data_offset": 256, 00:21:54.712 "data_size": 7936 00:21:54.712 } 00:21:54.712 ] 00:21:54.712 }' 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:54.712 "name": "raid_bdev1", 00:21:54.712 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:54.712 "strip_size_kb": 0, 00:21:54.712 "state": "online", 00:21:54.712 "raid_level": "raid1", 00:21:54.712 "superblock": true, 00:21:54.712 "num_base_bdevs": 2, 00:21:54.712 "num_base_bdevs_discovered": 2, 00:21:54.712 "num_base_bdevs_operational": 2, 00:21:54.712 "base_bdevs_list": [ 00:21:54.712 { 00:21:54.712 "name": "spare", 00:21:54.712 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:54.712 "is_configured": true, 00:21:54.712 
"data_offset": 256, 00:21:54.712 "data_size": 7936 00:21:54.712 }, 00:21:54.712 { 00:21:54.712 "name": "BaseBdev2", 00:21:54.712 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:54.712 "is_configured": true, 00:21:54.712 "data_offset": 256, 00:21:54.712 "data_size": 7936 00:21:54.712 } 00:21:54.712 ] 00:21:54.712 }' 00:21:54.712 07:18:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:54.971 "name": "raid_bdev1", 00:21:54.971 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:54.971 "strip_size_kb": 0, 00:21:54.971 "state": "online", 00:21:54.971 "raid_level": "raid1", 00:21:54.971 "superblock": true, 00:21:54.971 "num_base_bdevs": 2, 00:21:54.971 "num_base_bdevs_discovered": 2, 00:21:54.971 "num_base_bdevs_operational": 2, 00:21:54.971 "base_bdevs_list": [ 00:21:54.971 { 00:21:54.971 "name": "spare", 00:21:54.971 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:54.971 "is_configured": true, 00:21:54.971 "data_offset": 256, 00:21:54.971 "data_size": 7936 00:21:54.971 }, 00:21:54.971 { 00:21:54.971 "name": "BaseBdev2", 00:21:54.971 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:54.971 "is_configured": true, 00:21:54.971 "data_offset": 256, 00:21:54.971 "data_size": 7936 00:21:54.971 } 00:21:54.971 ] 00:21:54.971 }' 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:54.971 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.539 
[2024-11-20 07:18:52.579215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:55.539 [2024-11-20 07:18:52.579263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.539 [2024-11-20 07:18:52.579363] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.539 [2024-11-20 07:18:52.579457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:55.539 [2024-11-20 07:18:52.579473] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.539 07:18:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:55.798 /dev/nbd0 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:55.798 1+0 records in 00:21:55.798 1+0 records out 00:21:55.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003382 s, 12.1 MB/s 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:55.798 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:21:56.056 /dev/nbd1 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:56.056 1+0 records in 00:21:56.056 1+0 records out 00:21:56.056 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543055 s, 7.5 MB/s 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:56.056 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:56.314 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:21:56.314 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:21:56.314 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:56.314 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:56.314 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:21:56.315 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.315 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:56.586 07:18:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:56.890 07:18:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.890 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:56.890 [2024-11-20 07:18:54.205831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:56.890 [2024-11-20 07:18:54.205949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:56.890 [2024-11-20 07:18:54.206006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:56.890 [2024-11-20 07:18:54.206036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:57.149 [2024-11-20 07:18:54.210311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:57.149 
[2024-11-20 07:18:54.210381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:57.149 [2024-11-20 07:18:54.210577] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:57.149 [2024-11-20 07:18:54.210681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:57.149 [2024-11-20 07:18:54.211106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:57.149 spare 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.149 [2024-11-20 07:18:54.311329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:57.149 [2024-11-20 07:18:54.311408] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:21:57.149 [2024-11-20 07:18:54.311826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:21:57.149 [2024-11-20 07:18:54.312143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:57.149 [2024-11-20 07:18:54.312171] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:57.149 [2024-11-20 07:18:54.312427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:57.149 07:18:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.149 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.150 "name": "raid_bdev1", 00:21:57.150 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:57.150 "strip_size_kb": 0, 00:21:57.150 "state": "online", 00:21:57.150 "raid_level": "raid1", 00:21:57.150 "superblock": true, 00:21:57.150 "num_base_bdevs": 2, 00:21:57.150 "num_base_bdevs_discovered": 2, 00:21:57.150 "num_base_bdevs_operational": 2, 
00:21:57.150 "base_bdevs_list": [ 00:21:57.150 { 00:21:57.150 "name": "spare", 00:21:57.150 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:57.150 "is_configured": true, 00:21:57.150 "data_offset": 256, 00:21:57.150 "data_size": 7936 00:21:57.150 }, 00:21:57.150 { 00:21:57.150 "name": "BaseBdev2", 00:21:57.150 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:57.150 "is_configured": true, 00:21:57.150 "data_offset": 256, 00:21:57.150 "data_size": 7936 00:21:57.150 } 00:21:57.150 ] 00:21:57.150 }' 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.150 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:57.718 "name": "raid_bdev1", 00:21:57.718 
"uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:57.718 "strip_size_kb": 0, 00:21:57.718 "state": "online", 00:21:57.718 "raid_level": "raid1", 00:21:57.718 "superblock": true, 00:21:57.718 "num_base_bdevs": 2, 00:21:57.718 "num_base_bdevs_discovered": 2, 00:21:57.718 "num_base_bdevs_operational": 2, 00:21:57.718 "base_bdevs_list": [ 00:21:57.718 { 00:21:57.718 "name": "spare", 00:21:57.718 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:57.718 "is_configured": true, 00:21:57.718 "data_offset": 256, 00:21:57.718 "data_size": 7936 00:21:57.718 }, 00:21:57.718 { 00:21:57.718 "name": "BaseBdev2", 00:21:57.718 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:57.718 "is_configured": true, 00:21:57.718 "data_offset": 256, 00:21:57.718 "data_size": 7936 00:21:57.718 } 00:21:57.718 ] 00:21:57.718 }' 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:57.718 07:18:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:57.718 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:57.718 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.718 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:57.718 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.718 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.977 [2024-11-20 07:18:55.082936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:57.977 
07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:57.977 "name": "raid_bdev1", 00:21:57.977 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:57.977 "strip_size_kb": 0, 00:21:57.977 "state": "online", 00:21:57.977 "raid_level": "raid1", 00:21:57.977 "superblock": true, 00:21:57.977 "num_base_bdevs": 2, 00:21:57.977 "num_base_bdevs_discovered": 1, 00:21:57.977 "num_base_bdevs_operational": 1, 00:21:57.977 "base_bdevs_list": [ 00:21:57.977 { 00:21:57.977 "name": null, 00:21:57.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.977 "is_configured": false, 00:21:57.977 "data_offset": 0, 00:21:57.977 "data_size": 7936 00:21:57.977 }, 00:21:57.977 { 00:21:57.977 "name": "BaseBdev2", 00:21:57.977 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:57.977 "is_configured": true, 00:21:57.977 "data_offset": 256, 00:21:57.977 "data_size": 7936 00:21:57.977 } 00:21:57.977 ] 00:21:57.977 }' 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:57.977 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.545 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:58.545 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.545 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:58.545 [2024-11-20 07:18:55.643111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.545 [2024-11-20 07:18:55.643377] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:21:58.545 [2024-11-20 07:18:55.643405] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:21:58.545 [2024-11-20 07:18:55.643453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:58.545 [2024-11-20 07:18:55.658729] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:21:58.545 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.545 07:18:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:58.545 [2024-11-20 07:18:55.661281] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:59.481 
"name": "raid_bdev1", 00:21:59.481 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:59.481 "strip_size_kb": 0, 00:21:59.481 "state": "online", 00:21:59.481 "raid_level": "raid1", 00:21:59.481 "superblock": true, 00:21:59.481 "num_base_bdevs": 2, 00:21:59.481 "num_base_bdevs_discovered": 2, 00:21:59.481 "num_base_bdevs_operational": 2, 00:21:59.481 "process": { 00:21:59.481 "type": "rebuild", 00:21:59.481 "target": "spare", 00:21:59.481 "progress": { 00:21:59.481 "blocks": 2560, 00:21:59.481 "percent": 32 00:21:59.481 } 00:21:59.481 }, 00:21:59.481 "base_bdevs_list": [ 00:21:59.481 { 00:21:59.481 "name": "spare", 00:21:59.481 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:21:59.481 "is_configured": true, 00:21:59.481 "data_offset": 256, 00:21:59.481 "data_size": 7936 00:21:59.481 }, 00:21:59.481 { 00:21:59.481 "name": "BaseBdev2", 00:21:59.481 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:59.481 "is_configured": true, 00:21:59.481 "data_offset": 256, 00:21:59.481 "data_size": 7936 00:21:59.481 } 00:21:59.481 ] 00:21:59.481 }' 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:59.481 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.741 [2024-11-20 07:18:56.806878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.741 [2024-11-20 
07:18:56.870094] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:59.741 [2024-11-20 07:18:56.870220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:59.741 [2024-11-20 07:18:56.870245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:59.741 [2024-11-20 07:18:56.870259] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.741 07:18:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:59.741 "name": "raid_bdev1", 00:21:59.741 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:21:59.741 "strip_size_kb": 0, 00:21:59.741 "state": "online", 00:21:59.741 "raid_level": "raid1", 00:21:59.741 "superblock": true, 00:21:59.741 "num_base_bdevs": 2, 00:21:59.741 "num_base_bdevs_discovered": 1, 00:21:59.741 "num_base_bdevs_operational": 1, 00:21:59.741 "base_bdevs_list": [ 00:21:59.741 { 00:21:59.741 "name": null, 00:21:59.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:59.741 "is_configured": false, 00:21:59.741 "data_offset": 0, 00:21:59.741 "data_size": 7936 00:21:59.741 }, 00:21:59.741 { 00:21:59.741 "name": "BaseBdev2", 00:21:59.741 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:21:59.741 "is_configured": true, 00:21:59.741 "data_offset": 256, 00:21:59.741 "data_size": 7936 00:21:59.741 } 00:21:59.741 ] 00:21:59.741 }' 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:59.741 07:18:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.311 07:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:00.311 07:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.311 07:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:00.311 [2024-11-20 07:18:57.401845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:00.311 [2024-11-20 07:18:57.401954] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:00.311 [2024-11-20 07:18:57.401988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:00.311 [2024-11-20 07:18:57.402006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:00.311 [2024-11-20 07:18:57.402612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:00.311 [2024-11-20 07:18:57.402674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:00.311 [2024-11-20 07:18:57.402824] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:00.311 [2024-11-20 07:18:57.402854] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:00.311 [2024-11-20 07:18:57.402891] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:22:00.311 [2024-11-20 07:18:57.402937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:00.311 [2024-11-20 07:18:57.419810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:00.311 spare 00:22:00.311 07:18:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.311 07:18:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:00.311 [2024-11-20 07:18:57.422637] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:01.261 "name": "raid_bdev1", 00:22:01.261 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:01.261 "strip_size_kb": 0, 00:22:01.261 
"state": "online", 00:22:01.261 "raid_level": "raid1", 00:22:01.261 "superblock": true, 00:22:01.261 "num_base_bdevs": 2, 00:22:01.261 "num_base_bdevs_discovered": 2, 00:22:01.261 "num_base_bdevs_operational": 2, 00:22:01.261 "process": { 00:22:01.261 "type": "rebuild", 00:22:01.261 "target": "spare", 00:22:01.261 "progress": { 00:22:01.261 "blocks": 2560, 00:22:01.261 "percent": 32 00:22:01.261 } 00:22:01.261 }, 00:22:01.261 "base_bdevs_list": [ 00:22:01.261 { 00:22:01.261 "name": "spare", 00:22:01.261 "uuid": "30ad11d4-41a0-5cde-a093-7eee4c6fe1d8", 00:22:01.261 "is_configured": true, 00:22:01.261 "data_offset": 256, 00:22:01.261 "data_size": 7936 00:22:01.261 }, 00:22:01.261 { 00:22:01.261 "name": "BaseBdev2", 00:22:01.261 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:01.261 "is_configured": true, 00:22:01.261 "data_offset": 256, 00:22:01.261 "data_size": 7936 00:22:01.261 } 00:22:01.261 ] 00:22:01.261 }' 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:01.261 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:01.262 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.262 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.262 [2024-11-20 07:18:58.579771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.520 [2024-11-20 07:18:58.631673] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:22:01.520 [2024-11-20 07:18:58.631793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.520 [2024-11-20 07:18:58.631823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:01.520 [2024-11-20 07:18:58.631835] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:01.520 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.521 07:18:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:01.521 "name": "raid_bdev1", 00:22:01.521 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:01.521 "strip_size_kb": 0, 00:22:01.521 "state": "online", 00:22:01.521 "raid_level": "raid1", 00:22:01.521 "superblock": true, 00:22:01.521 "num_base_bdevs": 2, 00:22:01.521 "num_base_bdevs_discovered": 1, 00:22:01.521 "num_base_bdevs_operational": 1, 00:22:01.521 "base_bdevs_list": [ 00:22:01.521 { 00:22:01.521 "name": null, 00:22:01.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.521 "is_configured": false, 00:22:01.521 "data_offset": 0, 00:22:01.521 "data_size": 7936 00:22:01.521 }, 00:22:01.521 { 00:22:01.521 "name": "BaseBdev2", 00:22:01.521 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:01.521 "is_configured": true, 00:22:01.521 "data_offset": 256, 00:22:01.521 "data_size": 7936 00:22:01.521 } 00:22:01.521 ] 00:22:01.521 }' 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:01.521 07:18:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:02.088 "name": "raid_bdev1", 00:22:02.088 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:02.088 "strip_size_kb": 0, 00:22:02.088 "state": "online", 00:22:02.088 "raid_level": "raid1", 00:22:02.088 "superblock": true, 00:22:02.088 "num_base_bdevs": 2, 00:22:02.088 "num_base_bdevs_discovered": 1, 00:22:02.088 "num_base_bdevs_operational": 1, 00:22:02.088 "base_bdevs_list": [ 00:22:02.088 { 00:22:02.088 "name": null, 00:22:02.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.088 "is_configured": false, 00:22:02.088 "data_offset": 0, 00:22:02.088 "data_size": 7936 00:22:02.088 }, 00:22:02.088 { 00:22:02.088 "name": "BaseBdev2", 00:22:02.088 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:02.088 "is_configured": true, 00:22:02.088 "data_offset": 256, 00:22:02.088 "data_size": 7936 00:22:02.088 } 00:22:02.088 ] 00:22:02.088 }' 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:02.088 [2024-11-20 07:18:59.320012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:02.088 [2024-11-20 07:18:59.320087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.088 [2024-11-20 07:18:59.320121] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:02.088 [2024-11-20 07:18:59.320147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.088 [2024-11-20 07:18:59.320733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.088 [2024-11-20 07:18:59.320769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:02.088 [2024-11-20 07:18:59.320898] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:02.088 [2024-11-20 07:18:59.320922] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:02.088 [2024-11-20 07:18:59.320939] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:02.088 [2024-11-20 07:18:59.320953] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:02.088 BaseBdev1 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.088 07:18:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.022 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:03.281 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.281 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:03.281 "name": "raid_bdev1", 00:22:03.281 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:03.281 "strip_size_kb": 0, 00:22:03.281 "state": "online", 00:22:03.281 "raid_level": "raid1", 00:22:03.281 "superblock": true, 00:22:03.281 "num_base_bdevs": 2, 00:22:03.281 "num_base_bdevs_discovered": 1, 00:22:03.281 "num_base_bdevs_operational": 1, 00:22:03.281 "base_bdevs_list": [ 00:22:03.281 { 00:22:03.281 "name": null, 00:22:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.281 "is_configured": false, 00:22:03.281 "data_offset": 0, 00:22:03.281 "data_size": 7936 00:22:03.281 }, 00:22:03.281 { 00:22:03.281 "name": "BaseBdev2", 00:22:03.281 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:03.281 "is_configured": true, 00:22:03.281 "data_offset": 256, 00:22:03.281 "data_size": 7936 00:22:03.281 } 00:22:03.281 ] 00:22:03.281 }' 00:22:03.281 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:03.281 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.539 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:03.813 "name": "raid_bdev1", 00:22:03.813 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:03.813 "strip_size_kb": 0, 00:22:03.813 "state": "online", 00:22:03.813 "raid_level": "raid1", 00:22:03.813 "superblock": true, 00:22:03.813 "num_base_bdevs": 2, 00:22:03.813 "num_base_bdevs_discovered": 1, 00:22:03.813 "num_base_bdevs_operational": 1, 00:22:03.813 "base_bdevs_list": [ 00:22:03.813 { 00:22:03.813 "name": null, 00:22:03.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.813 "is_configured": false, 00:22:03.813 "data_offset": 0, 00:22:03.813 "data_size": 7936 00:22:03.813 }, 00:22:03.813 { 00:22:03.813 "name": "BaseBdev2", 00:22:03.813 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:03.813 "is_configured": true, 00:22:03.813 "data_offset": 256, 00:22:03.813 "data_size": 7936 00:22:03.813 } 00:22:03.813 ] 00:22:03.813 }' 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.813 07:19:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:03.813 [2024-11-20 07:19:00.996528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:03.813 [2024-11-20 07:19:00.997757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:03.813 [2024-11-20 07:19:00.997788] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:03.813 request: 00:22:03.813 { 00:22:03.813 "base_bdev": "BaseBdev1", 00:22:03.813 "raid_bdev": "raid_bdev1", 00:22:03.813 "method": "bdev_raid_add_base_bdev", 00:22:03.813 "req_id": 1 00:22:03.813 } 00:22:03.813 Got JSON-RPC error response 00:22:03.813 response: 00:22:03.813 { 00:22:03.813 "code": -22, 00:22:03.813 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:03.813 } 00:22:03.813 07:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:22:03.813 07:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:22:03.813 07:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:03.813 07:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:03.813 07:19:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:03.813 07:19:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:04.748 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:04.749 "name": "raid_bdev1", 00:22:04.749 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:04.749 "strip_size_kb": 0, 00:22:04.749 "state": "online", 00:22:04.749 "raid_level": "raid1", 00:22:04.749 "superblock": true, 00:22:04.749 "num_base_bdevs": 2, 00:22:04.749 "num_base_bdevs_discovered": 1, 00:22:04.749 "num_base_bdevs_operational": 1, 00:22:04.749 "base_bdevs_list": [ 00:22:04.749 { 00:22:04.749 "name": null, 00:22:04.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.749 "is_configured": false, 00:22:04.749 "data_offset": 0, 00:22:04.749 "data_size": 7936 00:22:04.749 }, 00:22:04.749 { 00:22:04.749 "name": "BaseBdev2", 00:22:04.749 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:04.749 "is_configured": true, 00:22:04.749 "data_offset": 256, 00:22:04.749 "data_size": 7936 00:22:04.749 } 00:22:04.749 ] 00:22:04.749 }' 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:04.749 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.315 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:05.315 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:05.316 07:19:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:05.316 "name": "raid_bdev1", 00:22:05.316 "uuid": "20085e12-5e39-4960-a221-95cd2e72e63e", 00:22:05.316 "strip_size_kb": 0, 00:22:05.316 "state": "online", 00:22:05.316 "raid_level": "raid1", 00:22:05.316 "superblock": true, 00:22:05.316 "num_base_bdevs": 2, 00:22:05.316 "num_base_bdevs_discovered": 1, 00:22:05.316 "num_base_bdevs_operational": 1, 00:22:05.316 "base_bdevs_list": [ 00:22:05.316 { 00:22:05.316 "name": null, 00:22:05.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.316 "is_configured": false, 00:22:05.316 "data_offset": 0, 00:22:05.316 "data_size": 7936 00:22:05.316 }, 00:22:05.316 { 00:22:05.316 "name": "BaseBdev2", 00:22:05.316 "uuid": "871b5689-34a2-5a8b-a1d6-01fdd5610c04", 00:22:05.316 "is_configured": true, 00:22:05.316 "data_offset": 256, 00:22:05.316 "data_size": 7936 00:22:05.316 } 00:22:05.316 ] 00:22:05.316 }' 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:05.316 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:05.573 07:19:02 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86851 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86851 ']' 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86851 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86851 00:22:05.573 killing process with pid 86851 00:22:05.573 Received shutdown signal, test time was about 60.000000 seconds 00:22:05.573 00:22:05.573 Latency(us) 00:22:05.573 [2024-11-20T07:19:02.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.573 [2024-11-20T07:19:02.893Z] =================================================================================================================== 00:22:05.573 [2024-11-20T07:19:02.893Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86851' 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86851 00:22:05.573 [2024-11-20 07:19:02.698971] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:05.573 07:19:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86851 00:22:05.573 [2024-11-20 07:19:02.699125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:05.573 [2024-11-20 
07:19:02.699192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:05.573 [2024-11-20 07:19:02.699211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:05.830 [2024-11-20 07:19:02.960389] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:06.764 ************************************ 00:22:06.764 END TEST raid_rebuild_test_sb_4k 00:22:06.764 ************************************ 00:22:06.764 07:19:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:22:06.764 00:22:06.764 real 0m21.549s 00:22:06.764 user 0m29.200s 00:22:06.764 sys 0m2.547s 00:22:06.764 07:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.764 07:19:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:22:06.764 07:19:04 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:22:06.764 07:19:04 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:22:06.764 07:19:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:06.764 07:19:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.764 07:19:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:06.764 ************************************ 00:22:06.764 START TEST raid_state_function_test_sb_md_separate 00:22:06.764 ************************************ 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:06.764 
07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:06.764 07:19:04 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:22:06.764 Process raid pid: 87554 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87554 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87554' 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87554 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87554 ']' 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:06.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:06.764 07:19:04 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:07.051 [2024-11-20 07:19:04.133677] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:22:07.051 [2024-11-20 07:19:04.133858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.051 [2024-11-20 07:19:04.319438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.310 [2024-11-20 07:19:04.450805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.569 [2024-11-20 07:19:04.656363] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.569 [2024-11-20 07:19:04.656416] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:07.828 [2024-11-20 07:19:05.073755] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:07.828 [2024-11-20 07:19:05.073817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:22:07.828 [2024-11-20 07:19:05.073834] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:07.828 [2024-11-20 07:19:05.073851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:07.828 "name": "Existed_Raid", 00:22:07.828 "uuid": "2731de81-2bb6-4332-8fb3-83578e092074", 00:22:07.828 "strip_size_kb": 0, 00:22:07.828 "state": "configuring", 00:22:07.828 "raid_level": "raid1", 00:22:07.828 "superblock": true, 00:22:07.828 "num_base_bdevs": 2, 00:22:07.828 "num_base_bdevs_discovered": 0, 00:22:07.828 "num_base_bdevs_operational": 2, 00:22:07.828 "base_bdevs_list": [ 00:22:07.828 { 00:22:07.828 "name": "BaseBdev1", 00:22:07.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.828 "is_configured": false, 00:22:07.828 "data_offset": 0, 00:22:07.828 "data_size": 0 00:22:07.828 }, 00:22:07.828 { 00:22:07.828 "name": "BaseBdev2", 00:22:07.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.828 "is_configured": false, 00:22:07.828 "data_offset": 0, 00:22:07.828 "data_size": 0 00:22:07.828 } 00:22:07.828 ] 00:22:07.828 }' 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:07.828 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 [2024-11-20 
07:19:05.561865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.396 [2024-11-20 07:19:05.561906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 [2024-11-20 07:19:05.569838] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:08.396 [2024-11-20 07:19:05.570036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:08.396 [2024-11-20 07:19:05.570063] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.396 [2024-11-20 07:19:05.570084] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 [2024-11-20 07:19:05.615444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.396 BaseBdev1 
00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 [ 00:22:08.396 { 00:22:08.396 "name": "BaseBdev1", 00:22:08.396 "aliases": [ 00:22:08.396 "c9e6dc42-2e03-4192-bf6c-a0badc564835" 00:22:08.396 ], 00:22:08.396 "product_name": "Malloc disk", 00:22:08.396 
"block_size": 4096, 00:22:08.396 "num_blocks": 8192, 00:22:08.396 "uuid": "c9e6dc42-2e03-4192-bf6c-a0badc564835", 00:22:08.396 "md_size": 32, 00:22:08.396 "md_interleave": false, 00:22:08.396 "dif_type": 0, 00:22:08.396 "assigned_rate_limits": { 00:22:08.396 "rw_ios_per_sec": 0, 00:22:08.396 "rw_mbytes_per_sec": 0, 00:22:08.396 "r_mbytes_per_sec": 0, 00:22:08.396 "w_mbytes_per_sec": 0 00:22:08.396 }, 00:22:08.396 "claimed": true, 00:22:08.396 "claim_type": "exclusive_write", 00:22:08.396 "zoned": false, 00:22:08.396 "supported_io_types": { 00:22:08.396 "read": true, 00:22:08.396 "write": true, 00:22:08.396 "unmap": true, 00:22:08.396 "flush": true, 00:22:08.396 "reset": true, 00:22:08.396 "nvme_admin": false, 00:22:08.396 "nvme_io": false, 00:22:08.396 "nvme_io_md": false, 00:22:08.396 "write_zeroes": true, 00:22:08.396 "zcopy": true, 00:22:08.396 "get_zone_info": false, 00:22:08.396 "zone_management": false, 00:22:08.396 "zone_append": false, 00:22:08.396 "compare": false, 00:22:08.396 "compare_and_write": false, 00:22:08.396 "abort": true, 00:22:08.396 "seek_hole": false, 00:22:08.396 "seek_data": false, 00:22:08.396 "copy": true, 00:22:08.396 "nvme_iov_md": false 00:22:08.396 }, 00:22:08.396 "memory_domains": [ 00:22:08.396 { 00:22:08.396 "dma_device_id": "system", 00:22:08.396 "dma_device_type": 1 00:22:08.396 }, 00:22:08.396 { 00:22:08.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.396 "dma_device_type": 2 00:22:08.396 } 00:22:08.396 ], 00:22:08.396 "driver_specific": {} 00:22:08.396 } 00:22:08.396 ] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:08.396 07:19:05 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.396 "name": "Existed_Raid", 00:22:08.396 "uuid": "cdfa6c50-09f0-453a-8b65-2dbb5e4d78f8", 
00:22:08.396 "strip_size_kb": 0, 00:22:08.396 "state": "configuring", 00:22:08.396 "raid_level": "raid1", 00:22:08.396 "superblock": true, 00:22:08.396 "num_base_bdevs": 2, 00:22:08.396 "num_base_bdevs_discovered": 1, 00:22:08.396 "num_base_bdevs_operational": 2, 00:22:08.396 "base_bdevs_list": [ 00:22:08.396 { 00:22:08.396 "name": "BaseBdev1", 00:22:08.396 "uuid": "c9e6dc42-2e03-4192-bf6c-a0badc564835", 00:22:08.396 "is_configured": true, 00:22:08.396 "data_offset": 256, 00:22:08.396 "data_size": 7936 00:22:08.396 }, 00:22:08.396 { 00:22:08.396 "name": "BaseBdev2", 00:22:08.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.396 "is_configured": false, 00:22:08.396 "data_offset": 0, 00:22:08.396 "data_size": 0 00:22:08.396 } 00:22:08.396 ] 00:22:08.396 }' 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.396 07:19:05 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.963 [2024-11-20 07:19:06.127670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:08.963 [2024-11-20 07:19:06.127729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:08.963 07:19:06 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.963 [2024-11-20 07:19:06.135706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:08.963 [2024-11-20 07:19:06.138332] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:08.963 [2024-11-20 07:19:06.138544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:08.963 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:08.964 "name": "Existed_Raid", 00:22:08.964 "uuid": "814a4f82-ba13-419b-ba36-6e3fdc16de37", 00:22:08.964 "strip_size_kb": 0, 00:22:08.964 "state": "configuring", 00:22:08.964 "raid_level": "raid1", 00:22:08.964 "superblock": true, 00:22:08.964 "num_base_bdevs": 2, 00:22:08.964 "num_base_bdevs_discovered": 1, 00:22:08.964 "num_base_bdevs_operational": 2, 00:22:08.964 "base_bdevs_list": [ 00:22:08.964 { 00:22:08.964 "name": "BaseBdev1", 00:22:08.964 "uuid": "c9e6dc42-2e03-4192-bf6c-a0badc564835", 00:22:08.964 "is_configured": true, 00:22:08.964 "data_offset": 256, 00:22:08.964 "data_size": 7936 00:22:08.964 }, 00:22:08.964 { 00:22:08.964 "name": "BaseBdev2", 00:22:08.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:08.964 "is_configured": false, 00:22:08.964 "data_offset": 0, 00:22:08.964 "data_size": 0 00:22:08.964 } 00:22:08.964 ] 00:22:08.964 }' 00:22:08.964 07:19:06 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:08.964 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.531 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:22:09.531 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.532 [2024-11-20 07:19:06.683905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:09.532 [2024-11-20 07:19:06.684244] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:09.532 [2024-11-20 07:19:06.684264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:09.532 [2024-11-20 07:19:06.684363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:09.532 [2024-11-20 07:19:06.684519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:09.532 [2024-11-20 07:19:06.684538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:09.532 BaseBdev2 00:22:09.532 [2024-11-20 07:19:06.684662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.532 [ 00:22:09.532 { 00:22:09.532 "name": "BaseBdev2", 00:22:09.532 "aliases": [ 00:22:09.532 "9d4fb25a-8acd-4477-93e2-b805d85d33ba" 00:22:09.532 ], 00:22:09.532 "product_name": "Malloc disk", 00:22:09.532 "block_size": 4096, 00:22:09.532 "num_blocks": 8192, 00:22:09.532 "uuid": "9d4fb25a-8acd-4477-93e2-b805d85d33ba", 00:22:09.532 "md_size": 32, 00:22:09.532 "md_interleave": false, 00:22:09.532 "dif_type": 0, 00:22:09.532 "assigned_rate_limits": { 00:22:09.532 "rw_ios_per_sec": 0, 00:22:09.532 "rw_mbytes_per_sec": 0, 00:22:09.532 "r_mbytes_per_sec": 0, 00:22:09.532 "w_mbytes_per_sec": 0 00:22:09.532 }, 00:22:09.532 "claimed": true, 00:22:09.532 "claim_type": 
"exclusive_write", 00:22:09.532 "zoned": false, 00:22:09.532 "supported_io_types": { 00:22:09.532 "read": true, 00:22:09.532 "write": true, 00:22:09.532 "unmap": true, 00:22:09.532 "flush": true, 00:22:09.532 "reset": true, 00:22:09.532 "nvme_admin": false, 00:22:09.532 "nvme_io": false, 00:22:09.532 "nvme_io_md": false, 00:22:09.532 "write_zeroes": true, 00:22:09.532 "zcopy": true, 00:22:09.532 "get_zone_info": false, 00:22:09.532 "zone_management": false, 00:22:09.532 "zone_append": false, 00:22:09.532 "compare": false, 00:22:09.532 "compare_and_write": false, 00:22:09.532 "abort": true, 00:22:09.532 "seek_hole": false, 00:22:09.532 "seek_data": false, 00:22:09.532 "copy": true, 00:22:09.532 "nvme_iov_md": false 00:22:09.532 }, 00:22:09.532 "memory_domains": [ 00:22:09.532 { 00:22:09.532 "dma_device_id": "system", 00:22:09.532 "dma_device_type": 1 00:22:09.532 }, 00:22:09.532 { 00:22:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.532 "dma_device_type": 2 00:22:09.532 } 00:22:09.532 ], 00:22:09.532 "driver_specific": {} 00:22:09.532 } 00:22:09.532 ] 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:09.532 
07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:09.532 "name": "Existed_Raid", 00:22:09.532 "uuid": "814a4f82-ba13-419b-ba36-6e3fdc16de37", 00:22:09.532 "strip_size_kb": 0, 00:22:09.532 "state": "online", 00:22:09.532 "raid_level": "raid1", 00:22:09.532 "superblock": true, 00:22:09.532 "num_base_bdevs": 2, 00:22:09.532 "num_base_bdevs_discovered": 2, 00:22:09.532 "num_base_bdevs_operational": 2, 00:22:09.532 
"base_bdevs_list": [ 00:22:09.532 { 00:22:09.532 "name": "BaseBdev1", 00:22:09.532 "uuid": "c9e6dc42-2e03-4192-bf6c-a0badc564835", 00:22:09.532 "is_configured": true, 00:22:09.532 "data_offset": 256, 00:22:09.532 "data_size": 7936 00:22:09.532 }, 00:22:09.532 { 00:22:09.532 "name": "BaseBdev2", 00:22:09.532 "uuid": "9d4fb25a-8acd-4477-93e2-b805d85d33ba", 00:22:09.532 "is_configured": true, 00:22:09.532 "data_offset": 256, 00:22:09.532 "data_size": 7936 00:22:09.532 } 00:22:09.532 ] 00:22:09.532 }' 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:09.532 07:19:06 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:22:10.100 [2024-11-20 07:19:07.232544] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.100 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:10.100 "name": "Existed_Raid", 00:22:10.100 "aliases": [ 00:22:10.100 "814a4f82-ba13-419b-ba36-6e3fdc16de37" 00:22:10.100 ], 00:22:10.100 "product_name": "Raid Volume", 00:22:10.100 "block_size": 4096, 00:22:10.100 "num_blocks": 7936, 00:22:10.100 "uuid": "814a4f82-ba13-419b-ba36-6e3fdc16de37", 00:22:10.100 "md_size": 32, 00:22:10.100 "md_interleave": false, 00:22:10.100 "dif_type": 0, 00:22:10.100 "assigned_rate_limits": { 00:22:10.100 "rw_ios_per_sec": 0, 00:22:10.100 "rw_mbytes_per_sec": 0, 00:22:10.100 "r_mbytes_per_sec": 0, 00:22:10.100 "w_mbytes_per_sec": 0 00:22:10.100 }, 00:22:10.100 "claimed": false, 00:22:10.100 "zoned": false, 00:22:10.100 "supported_io_types": { 00:22:10.100 "read": true, 00:22:10.100 "write": true, 00:22:10.100 "unmap": false, 00:22:10.100 "flush": false, 00:22:10.100 "reset": true, 00:22:10.100 "nvme_admin": false, 00:22:10.100 "nvme_io": false, 00:22:10.100 "nvme_io_md": false, 00:22:10.100 "write_zeroes": true, 00:22:10.100 "zcopy": false, 00:22:10.100 "get_zone_info": false, 00:22:10.100 "zone_management": false, 00:22:10.100 "zone_append": false, 00:22:10.100 "compare": false, 00:22:10.100 "compare_and_write": false, 00:22:10.100 "abort": false, 00:22:10.100 "seek_hole": false, 00:22:10.100 "seek_data": false, 00:22:10.100 "copy": false, 00:22:10.100 "nvme_iov_md": false 00:22:10.100 }, 00:22:10.100 "memory_domains": [ 00:22:10.100 { 00:22:10.100 "dma_device_id": "system", 00:22:10.100 "dma_device_type": 1 00:22:10.100 }, 00:22:10.100 { 00:22:10.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.100 "dma_device_type": 2 00:22:10.100 }, 00:22:10.100 { 
00:22:10.100 "dma_device_id": "system", 00:22:10.100 "dma_device_type": 1 00:22:10.100 }, 00:22:10.100 { 00:22:10.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:10.100 "dma_device_type": 2 00:22:10.101 } 00:22:10.101 ], 00:22:10.101 "driver_specific": { 00:22:10.101 "raid": { 00:22:10.101 "uuid": "814a4f82-ba13-419b-ba36-6e3fdc16de37", 00:22:10.101 "strip_size_kb": 0, 00:22:10.101 "state": "online", 00:22:10.101 "raid_level": "raid1", 00:22:10.101 "superblock": true, 00:22:10.101 "num_base_bdevs": 2, 00:22:10.101 "num_base_bdevs_discovered": 2, 00:22:10.101 "num_base_bdevs_operational": 2, 00:22:10.101 "base_bdevs_list": [ 00:22:10.101 { 00:22:10.101 "name": "BaseBdev1", 00:22:10.101 "uuid": "c9e6dc42-2e03-4192-bf6c-a0badc564835", 00:22:10.101 "is_configured": true, 00:22:10.101 "data_offset": 256, 00:22:10.101 "data_size": 7936 00:22:10.101 }, 00:22:10.101 { 00:22:10.101 "name": "BaseBdev2", 00:22:10.101 "uuid": "9d4fb25a-8acd-4477-93e2-b805d85d33ba", 00:22:10.101 "is_configured": true, 00:22:10.101 "data_offset": 256, 00:22:10.101 "data_size": 7936 00:22:10.101 } 00:22:10.101 ] 00:22:10.101 } 00:22:10.101 } 00:22:10.101 }' 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:10.101 BaseBdev2' 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.101 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.361 [2024-11-20 07:19:07.476271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:10.361 "name": "Existed_Raid", 00:22:10.361 "uuid": "814a4f82-ba13-419b-ba36-6e3fdc16de37", 00:22:10.361 "strip_size_kb": 0, 00:22:10.361 "state": "online", 00:22:10.361 "raid_level": "raid1", 00:22:10.361 "superblock": true, 00:22:10.361 "num_base_bdevs": 2, 00:22:10.361 "num_base_bdevs_discovered": 1, 00:22:10.361 "num_base_bdevs_operational": 1, 00:22:10.361 "base_bdevs_list": [ 00:22:10.361 { 00:22:10.361 "name": null, 00:22:10.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.361 "is_configured": false, 00:22:10.361 "data_offset": 0, 00:22:10.361 "data_size": 7936 00:22:10.361 }, 00:22:10.361 { 00:22:10.361 "name": "BaseBdev2", 00:22:10.361 "uuid": 
"9d4fb25a-8acd-4477-93e2-b805d85d33ba", 00:22:10.361 "is_configured": true, 00:22:10.361 "data_offset": 256, 00:22:10.361 "data_size": 7936 00:22:10.361 } 00:22:10.361 ] 00:22:10.361 }' 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:10.361 07:19:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.929 [2024-11-20 07:19:08.123838] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:10.929 [2024-11-20 07:19:08.123982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.929 [2024-11-20 07:19:08.216204] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.929 [2024-11-20 07:19:08.216270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.929 [2024-11-20 07:19:08.216291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:10.929 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:11.188 07:19:08 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87554 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87554 ']' 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87554 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87554 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.188 killing process with pid 87554 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87554' 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87554 00:22:11.188 [2024-11-20 07:19:08.305164] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:11.188 07:19:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87554 00:22:11.188 [2024-11-20 07:19:08.319733] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:12.122 ************************************ 00:22:12.122 END TEST raid_state_function_test_sb_md_separate 00:22:12.122 ************************************ 00:22:12.122 07:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:22:12.122 00:22:12.122 real 0m5.305s 00:22:12.122 user 0m7.938s 
00:22:12.122 sys 0m0.816s 00:22:12.122 07:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.122 07:19:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.122 07:19:09 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:22:12.122 07:19:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:12.122 07:19:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.122 07:19:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:12.122 ************************************ 00:22:12.122 START TEST raid_superblock_test_md_separate 00:22:12.122 ************************************ 00:22:12.122 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:12.122 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:12.122 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:12.122 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:12.122 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:12.122 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:12.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87801 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87801 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87801 ']' 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.123 07:19:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:12.435 [2024-11-20 07:19:09.482976] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:22:12.435 [2024-11-20 07:19:09.483382] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87801 ] 00:22:12.435 [2024-11-20 07:19:09.666532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.693 [2024-11-20 07:19:09.795280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.693 [2024-11-20 07:19:09.999431] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:12.693 [2024-11-20 07:19:09.999627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:13.261 07:19:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.261 malloc1 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.261 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.521 [2024-11-20 07:19:10.579891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:13.521 [2024-11-20 07:19:10.579986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.521 [2024-11-20 07:19:10.580020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:13.521 [2024-11-20 07:19:10.580037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.521 [2024-11-20 07:19:10.582540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.521 [2024-11-20 07:19:10.582586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:22:13.521 pt1 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.521 malloc2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.521 07:19:10 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.521 [2024-11-20 07:19:10.638574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:13.521 [2024-11-20 07:19:10.638784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.521 [2024-11-20 07:19:10.638826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:13.521 [2024-11-20 07:19:10.638842] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.521 [2024-11-20 07:19:10.641343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.521 [2024-11-20 07:19:10.641517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:13.521 pt2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.521 [2024-11-20 07:19:10.646592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.521 [2024-11-20 07:19:10.649117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.521 [2024-11-20 07:19:10.649477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:13.521 [2024-11-20 07:19:10.649609] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:13.521 [2024-11-20 07:19:10.649809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:13.521 [2024-11-20 07:19:10.650120] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:13.521 [2024-11-20 07:19:10.650271] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:13.521 [2024-11-20 07:19:10.650588] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:13.521 07:19:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.521 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:13.521 "name": "raid_bdev1", 00:22:13.521 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:13.521 "strip_size_kb": 0, 00:22:13.521 "state": "online", 00:22:13.521 "raid_level": "raid1", 00:22:13.521 "superblock": true, 00:22:13.521 "num_base_bdevs": 2, 00:22:13.521 "num_base_bdevs_discovered": 2, 00:22:13.522 "num_base_bdevs_operational": 2, 00:22:13.522 "base_bdevs_list": [ 00:22:13.522 { 00:22:13.522 "name": "pt1", 00:22:13.522 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:13.522 "is_configured": true, 00:22:13.522 "data_offset": 256, 00:22:13.522 "data_size": 7936 00:22:13.522 }, 00:22:13.522 { 00:22:13.522 "name": "pt2", 00:22:13.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:13.522 "is_configured": true, 00:22:13.522 "data_offset": 256, 00:22:13.522 "data_size": 7936 00:22:13.522 } 00:22:13.522 ] 00:22:13.522 }' 00:22:13.522 07:19:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:13.522 07:19:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.089 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:14.089 07:19:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:14.090 [2024-11-20 07:19:11.179095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:14.090 "name": "raid_bdev1", 00:22:14.090 "aliases": [ 00:22:14.090 "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b" 00:22:14.090 ], 00:22:14.090 "product_name": "Raid Volume", 00:22:14.090 "block_size": 4096, 00:22:14.090 "num_blocks": 7936, 00:22:14.090 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:14.090 "md_size": 32, 00:22:14.090 "md_interleave": false, 00:22:14.090 "dif_type": 0, 00:22:14.090 "assigned_rate_limits": { 00:22:14.090 "rw_ios_per_sec": 0, 00:22:14.090 "rw_mbytes_per_sec": 0, 00:22:14.090 "r_mbytes_per_sec": 0, 00:22:14.090 "w_mbytes_per_sec": 0 00:22:14.090 }, 00:22:14.090 "claimed": false, 00:22:14.090 "zoned": false, 
00:22:14.090 "supported_io_types": { 00:22:14.090 "read": true, 00:22:14.090 "write": true, 00:22:14.090 "unmap": false, 00:22:14.090 "flush": false, 00:22:14.090 "reset": true, 00:22:14.090 "nvme_admin": false, 00:22:14.090 "nvme_io": false, 00:22:14.090 "nvme_io_md": false, 00:22:14.090 "write_zeroes": true, 00:22:14.090 "zcopy": false, 00:22:14.090 "get_zone_info": false, 00:22:14.090 "zone_management": false, 00:22:14.090 "zone_append": false, 00:22:14.090 "compare": false, 00:22:14.090 "compare_and_write": false, 00:22:14.090 "abort": false, 00:22:14.090 "seek_hole": false, 00:22:14.090 "seek_data": false, 00:22:14.090 "copy": false, 00:22:14.090 "nvme_iov_md": false 00:22:14.090 }, 00:22:14.090 "memory_domains": [ 00:22:14.090 { 00:22:14.090 "dma_device_id": "system", 00:22:14.090 "dma_device_type": 1 00:22:14.090 }, 00:22:14.090 { 00:22:14.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.090 "dma_device_type": 2 00:22:14.090 }, 00:22:14.090 { 00:22:14.090 "dma_device_id": "system", 00:22:14.090 "dma_device_type": 1 00:22:14.090 }, 00:22:14.090 { 00:22:14.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.090 "dma_device_type": 2 00:22:14.090 } 00:22:14.090 ], 00:22:14.090 "driver_specific": { 00:22:14.090 "raid": { 00:22:14.090 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:14.090 "strip_size_kb": 0, 00:22:14.090 "state": "online", 00:22:14.090 "raid_level": "raid1", 00:22:14.090 "superblock": true, 00:22:14.090 "num_base_bdevs": 2, 00:22:14.090 "num_base_bdevs_discovered": 2, 00:22:14.090 "num_base_bdevs_operational": 2, 00:22:14.090 "base_bdevs_list": [ 00:22:14.090 { 00:22:14.090 "name": "pt1", 00:22:14.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.090 "is_configured": true, 00:22:14.090 "data_offset": 256, 00:22:14.090 "data_size": 7936 00:22:14.090 }, 00:22:14.090 { 00:22:14.090 "name": "pt2", 00:22:14.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.090 "is_configured": true, 00:22:14.090 "data_offset": 256, 
00:22:14.090 "data_size": 7936 00:22:14.090 } 00:22:14.090 ] 00:22:14.090 } 00:22:14.090 } 00:22:14.090 }' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:14.090 pt2' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.090 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.349 [2024-11-20 07:19:11.447129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2ba4cf50-9e5f-4048-b62f-ac2a07f6682b 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 2ba4cf50-9e5f-4048-b62f-ac2a07f6682b ']' 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.349 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.349 [2024-11-20 07:19:11.494773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.349 [2024-11-20 07:19:11.494943] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:14.349 [2024-11-20 07:19:11.495086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:14.350 [2024-11-20 07:19:11.495168] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:14.350 [2024-11-20 07:19:11.495189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:14.350 07:19:11 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.350 [2024-11-20 07:19:11.650889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:14.350 [2024-11-20 07:19:11.653371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:14.350 [2024-11-20 07:19:11.653481] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:14.350 [2024-11-20 07:19:11.653565] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:14.350 [2024-11-20 07:19:11.653593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:14.350 [2024-11-20 07:19:11.653609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:14.350 request: 00:22:14.350 { 00:22:14.350 "name": 
"raid_bdev1", 00:22:14.350 "raid_level": "raid1", 00:22:14.350 "base_bdevs": [ 00:22:14.350 "malloc1", 00:22:14.350 "malloc2" 00:22:14.350 ], 00:22:14.350 "superblock": false, 00:22:14.350 "method": "bdev_raid_create", 00:22:14.350 "req_id": 1 00:22:14.350 } 00:22:14.350 Got JSON-RPC error response 00:22:14.350 response: 00:22:14.350 { 00:22:14.350 "code": -17, 00:22:14.350 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:14.350 } 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:14.350 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.609 [2024-11-20 07:19:11.714844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:14.609 [2024-11-20 07:19:11.715058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.609 [2024-11-20 07:19:11.715128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:14.609 [2024-11-20 07:19:11.715354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.609 [2024-11-20 07:19:11.718060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.609 [2024-11-20 07:19:11.718236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:14.609 [2024-11-20 07:19:11.718407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:14.609 [2024-11-20 07:19:11.718526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:14.609 pt1 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.609 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:14.609 "name": "raid_bdev1", 00:22:14.609 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:14.609 "strip_size_kb": 0, 00:22:14.609 "state": "configuring", 00:22:14.609 "raid_level": "raid1", 00:22:14.609 "superblock": true, 00:22:14.609 "num_base_bdevs": 2, 00:22:14.609 "num_base_bdevs_discovered": 1, 00:22:14.609 "num_base_bdevs_operational": 2, 00:22:14.609 "base_bdevs_list": [ 00:22:14.609 { 00:22:14.609 "name": "pt1", 00:22:14.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:14.609 "is_configured": true, 00:22:14.609 "data_offset": 256, 00:22:14.609 "data_size": 7936 00:22:14.609 }, 00:22:14.609 { 00:22:14.609 "name": null, 00:22:14.609 
"uuid": "00000000-0000-0000-0000-000000000002", 00:22:14.609 "is_configured": false, 00:22:14.609 "data_offset": 256, 00:22:14.609 "data_size": 7936 00:22:14.609 } 00:22:14.609 ] 00:22:14.610 }' 00:22:14.610 07:19:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:14.610 07:19:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.176 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:15.176 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:15.176 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.176 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:15.176 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.176 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.176 [2024-11-20 07:19:12.246988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:15.176 [2024-11-20 07:19:12.247097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:15.177 [2024-11-20 07:19:12.247131] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:15.177 [2024-11-20 07:19:12.247150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:15.177 [2024-11-20 07:19:12.247442] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:15.177 [2024-11-20 07:19:12.247480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:15.177 [2024-11-20 07:19:12.247558] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:22:15.177 [2024-11-20 07:19:12.247595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:15.177 [2024-11-20 07:19:12.247742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:15.177 [2024-11-20 07:19:12.247763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:15.177 [2024-11-20 07:19:12.247852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:15.177 [2024-11-20 07:19:12.248032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:15.177 [2024-11-20 07:19:12.248049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:15.177 [2024-11-20 07:19:12.248173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.177 pt2 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:15.177 "name": "raid_bdev1", 00:22:15.177 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:15.177 "strip_size_kb": 0, 00:22:15.177 "state": "online", 00:22:15.177 "raid_level": "raid1", 00:22:15.177 "superblock": true, 00:22:15.177 "num_base_bdevs": 2, 00:22:15.177 "num_base_bdevs_discovered": 2, 00:22:15.177 "num_base_bdevs_operational": 2, 00:22:15.177 "base_bdevs_list": [ 00:22:15.177 { 00:22:15.177 "name": "pt1", 00:22:15.177 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.177 "is_configured": true, 00:22:15.177 "data_offset": 256, 00:22:15.177 "data_size": 7936 00:22:15.177 }, 00:22:15.177 { 00:22:15.177 "name": "pt2", 00:22:15.177 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.177 "is_configured": true, 00:22:15.177 "data_offset": 256, 
00:22:15.177 "data_size": 7936 00:22:15.177 } 00:22:15.177 ] 00:22:15.177 }' 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:15.177 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:15.747 [2024-11-20 07:19:12.771512] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.747 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:15.747 "name": "raid_bdev1", 00:22:15.747 "aliases": [ 00:22:15.747 "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b" 00:22:15.747 ], 00:22:15.747 "product_name": 
"Raid Volume", 00:22:15.747 "block_size": 4096, 00:22:15.747 "num_blocks": 7936, 00:22:15.747 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:15.747 "md_size": 32, 00:22:15.747 "md_interleave": false, 00:22:15.747 "dif_type": 0, 00:22:15.747 "assigned_rate_limits": { 00:22:15.747 "rw_ios_per_sec": 0, 00:22:15.747 "rw_mbytes_per_sec": 0, 00:22:15.747 "r_mbytes_per_sec": 0, 00:22:15.747 "w_mbytes_per_sec": 0 00:22:15.747 }, 00:22:15.747 "claimed": false, 00:22:15.747 "zoned": false, 00:22:15.747 "supported_io_types": { 00:22:15.747 "read": true, 00:22:15.747 "write": true, 00:22:15.747 "unmap": false, 00:22:15.747 "flush": false, 00:22:15.747 "reset": true, 00:22:15.747 "nvme_admin": false, 00:22:15.747 "nvme_io": false, 00:22:15.747 "nvme_io_md": false, 00:22:15.747 "write_zeroes": true, 00:22:15.747 "zcopy": false, 00:22:15.747 "get_zone_info": false, 00:22:15.747 "zone_management": false, 00:22:15.747 "zone_append": false, 00:22:15.747 "compare": false, 00:22:15.747 "compare_and_write": false, 00:22:15.747 "abort": false, 00:22:15.747 "seek_hole": false, 00:22:15.747 "seek_data": false, 00:22:15.747 "copy": false, 00:22:15.747 "nvme_iov_md": false 00:22:15.747 }, 00:22:15.747 "memory_domains": [ 00:22:15.747 { 00:22:15.747 "dma_device_id": "system", 00:22:15.747 "dma_device_type": 1 00:22:15.747 }, 00:22:15.747 { 00:22:15.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.747 "dma_device_type": 2 00:22:15.747 }, 00:22:15.747 { 00:22:15.747 "dma_device_id": "system", 00:22:15.747 "dma_device_type": 1 00:22:15.747 }, 00:22:15.747 { 00:22:15.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.747 "dma_device_type": 2 00:22:15.747 } 00:22:15.747 ], 00:22:15.747 "driver_specific": { 00:22:15.747 "raid": { 00:22:15.747 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:15.747 "strip_size_kb": 0, 00:22:15.747 "state": "online", 00:22:15.747 "raid_level": "raid1", 00:22:15.747 "superblock": true, 00:22:15.747 "num_base_bdevs": 2, 00:22:15.747 
"num_base_bdevs_discovered": 2, 00:22:15.747 "num_base_bdevs_operational": 2, 00:22:15.747 "base_bdevs_list": [ 00:22:15.748 { 00:22:15.748 "name": "pt1", 00:22:15.748 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:15.748 "is_configured": true, 00:22:15.748 "data_offset": 256, 00:22:15.748 "data_size": 7936 00:22:15.748 }, 00:22:15.748 { 00:22:15.748 "name": "pt2", 00:22:15.748 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:15.748 "is_configured": true, 00:22:15.748 "data_offset": 256, 00:22:15.748 "data_size": 7936 00:22:15.748 } 00:22:15.748 ] 00:22:15.748 } 00:22:15.748 } 00:22:15.748 }' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:15.748 pt2' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.748 
07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.748 07:19:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:15.748 [2024-11-20 07:19:13.031586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.748 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 2ba4cf50-9e5f-4048-b62f-ac2a07f6682b '!=' 2ba4cf50-9e5f-4048-b62f-ac2a07f6682b ']' 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.006 [2024-11-20 07:19:13.079305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.006 07:19:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.006 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.007 "name": "raid_bdev1", 00:22:16.007 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:16.007 "strip_size_kb": 0, 00:22:16.007 "state": "online", 00:22:16.007 "raid_level": "raid1", 00:22:16.007 "superblock": true, 00:22:16.007 "num_base_bdevs": 2, 00:22:16.007 "num_base_bdevs_discovered": 1, 00:22:16.007 "num_base_bdevs_operational": 1, 00:22:16.007 "base_bdevs_list": [ 00:22:16.007 { 00:22:16.007 "name": null, 00:22:16.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.007 "is_configured": false, 00:22:16.007 "data_offset": 0, 00:22:16.007 "data_size": 7936 00:22:16.007 }, 00:22:16.007 { 00:22:16.007 "name": "pt2", 00:22:16.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.007 "is_configured": true, 00:22:16.007 "data_offset": 256, 00:22:16.007 "data_size": 7936 00:22:16.007 } 00:22:16.007 ] 00:22:16.007 }' 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:22:16.007 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.574 [2024-11-20 07:19:13.619372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:16.574 [2024-11-20 07:19:13.619405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:16.574 [2024-11-20 07:19:13.619500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.574 [2024-11-20 07:19:13.619566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.574 [2024-11-20 07:19:13.619593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:16.574 07:19:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.574 [2024-11-20 07:19:13.711376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:16.574 [2024-11-20 07:19:13.711452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:16.574 
[2024-11-20 07:19:13.711480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:16.574 [2024-11-20 07:19:13.711507] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:16.574 [2024-11-20 07:19:13.714139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:16.574 [2024-11-20 07:19:13.714189] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:16.574 [2024-11-20 07:19:13.714256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:16.574 [2024-11-20 07:19:13.714319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:16.574 [2024-11-20 07:19:13.714437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:16.574 [2024-11-20 07:19:13.714460] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:16.574 [2024-11-20 07:19:13.714550] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:16.574 [2024-11-20 07:19:13.714694] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:16.574 [2024-11-20 07:19:13.714709] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:16.574 [2024-11-20 07:19:13.714828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:16.574 pt2 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:16.574 "name": "raid_bdev1", 00:22:16.574 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:16.574 "strip_size_kb": 0, 00:22:16.574 "state": "online", 00:22:16.574 "raid_level": "raid1", 00:22:16.574 "superblock": true, 00:22:16.574 "num_base_bdevs": 2, 00:22:16.574 "num_base_bdevs_discovered": 1, 00:22:16.574 "num_base_bdevs_operational": 1, 00:22:16.574 "base_bdevs_list": [ 00:22:16.574 { 00:22:16.574 
"name": null, 00:22:16.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:16.574 "is_configured": false, 00:22:16.574 "data_offset": 256, 00:22:16.574 "data_size": 7936 00:22:16.574 }, 00:22:16.574 { 00:22:16.574 "name": "pt2", 00:22:16.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:16.574 "is_configured": true, 00:22:16.574 "data_offset": 256, 00:22:16.574 "data_size": 7936 00:22:16.574 } 00:22:16.574 ] 00:22:16.574 }' 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:16.574 07:19:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.142 [2024-11-20 07:19:14.199525] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.142 [2024-11-20 07:19:14.199708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.142 [2024-11-20 07:19:14.199820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.142 [2024-11-20 07:19:14.199921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.142 [2024-11-20 07:19:14.199940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.142 07:19:14 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.142 [2024-11-20 07:19:14.267602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:17.142 [2024-11-20 07:19:14.267681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:17.142 [2024-11-20 07:19:14.267713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:17.142 [2024-11-20 07:19:14.267728] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:17.142 [2024-11-20 07:19:14.270370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:17.142 [2024-11-20 07:19:14.270547] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:17.142 [2024-11-20 07:19:14.270644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:22:17.142 [2024-11-20 07:19:14.270707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:17.142 [2024-11-20 07:19:14.270907] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:17.142 [2024-11-20 07:19:14.270926] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.142 [2024-11-20 07:19:14.270953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:17.142 [2024-11-20 07:19:14.271032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:17.142 [2024-11-20 07:19:14.271134] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:17.142 [2024-11-20 07:19:14.271150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:17.142 [2024-11-20 07:19:14.271244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:17.142 [2024-11-20 07:19:14.271382] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:17.142 [2024-11-20 07:19:14.271402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:17.142 [2024-11-20 07:19:14.271594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:17.142 pt1 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:17.142 "name": "raid_bdev1", 00:22:17.142 "uuid": "2ba4cf50-9e5f-4048-b62f-ac2a07f6682b", 00:22:17.142 "strip_size_kb": 0, 00:22:17.142 "state": "online", 00:22:17.142 "raid_level": "raid1", 00:22:17.142 "superblock": true, 00:22:17.142 "num_base_bdevs": 2, 00:22:17.142 "num_base_bdevs_discovered": 1, 00:22:17.142 
"num_base_bdevs_operational": 1, 00:22:17.142 "base_bdevs_list": [ 00:22:17.142 { 00:22:17.142 "name": null, 00:22:17.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:17.142 "is_configured": false, 00:22:17.142 "data_offset": 256, 00:22:17.142 "data_size": 7936 00:22:17.142 }, 00:22:17.142 { 00:22:17.142 "name": "pt2", 00:22:17.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:17.142 "is_configured": true, 00:22:17.142 "data_offset": 256, 00:22:17.142 "data_size": 7936 00:22:17.142 } 00:22:17.142 ] 00:22:17.142 }' 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:17.142 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:17.709 [2024-11-20 
07:19:14.864036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 2ba4cf50-9e5f-4048-b62f-ac2a07f6682b '!=' 2ba4cf50-9e5f-4048-b62f-ac2a07f6682b ']' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87801 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87801 ']' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87801 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87801 00:22:17.709 killing process with pid 87801 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87801' 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87801 00:22:17.709 [2024-11-20 07:19:14.941319] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.709 07:19:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87801 00:22:17.709 [2024-11-20 07:19:14.941430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:22:17.709 [2024-11-20 07:19:14.941495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.709 [2024-11-20 07:19:14.941522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:17.967 [2024-11-20 07:19:15.140012] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.903 ************************************ 00:22:18.903 END TEST raid_superblock_test_md_separate 00:22:18.903 ************************************ 00:22:18.903 07:19:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:22:18.903 00:22:18.903 real 0m6.795s 00:22:18.903 user 0m10.784s 00:22:18.903 sys 0m0.982s 00:22:18.903 07:19:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:18.903 07:19:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:18.903 07:19:16 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:22:18.903 07:19:16 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:22:18.903 07:19:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:18.903 07:19:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:18.903 07:19:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:19.162 ************************************ 00:22:19.162 START TEST raid_rebuild_test_sb_md_separate 00:22:19.162 ************************************ 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:19.162 
07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:19.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88135 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88135 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88135 ']' 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:19.162 07:19:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:19.162 [2024-11-20 07:19:16.324594] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:22:19.162 [2024-11-20 07:19:16.324979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88135 ] 00:22:19.162 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.162 Zero copy mechanism will not be used. 00:22:19.421 [2024-11-20 07:19:16.506807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.421 [2024-11-20 07:19:16.661823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.679 [2024-11-20 07:19:16.882389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.679 [2024-11-20 07:19:16.882626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.247 BaseBdev1_malloc 
00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.247 [2024-11-20 07:19:17.450653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.247 [2024-11-20 07:19:17.450738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.247 [2024-11-20 07:19:17.450769] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:20.247 [2024-11-20 07:19:17.450786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.247 [2024-11-20 07:19:17.453435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.247 [2024-11-20 07:19:17.453481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.247 BaseBdev1 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.247 BaseBdev2_malloc 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.247 [2024-11-20 07:19:17.506623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:20.247 [2024-11-20 07:19:17.506709] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.247 [2024-11-20 07:19:17.506737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:20.247 [2024-11-20 07:19:17.506756] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.247 [2024-11-20 07:19:17.509374] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.247 [2024-11-20 07:19:17.509436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.247 BaseBdev2 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.247 spare_malloc 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.247 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.507 spare_delay 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.507 [2024-11-20 07:19:17.577602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:20.507 [2024-11-20 07:19:17.577691] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.507 [2024-11-20 07:19:17.577721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:20.507 [2024-11-20 07:19:17.577738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.507 [2024-11-20 07:19:17.580375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.507 [2024-11-20 07:19:17.580426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:20.507 spare 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:20.507 [2024-11-20 07:19:17.585638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:20.507 [2024-11-20 07:19:17.588091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:20.507 [2024-11-20 07:19:17.588342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:20.507 [2024-11-20 07:19:17.588366] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:20.507 [2024-11-20 07:19:17.588462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:20.507 [2024-11-20 07:19:17.588665] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:20.507 [2024-11-20 07:19:17.588679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:20.507 [2024-11-20 07:19:17.588801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:20.507 07:19:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:20.507 "name": "raid_bdev1", 00:22:20.507 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:20.507 "strip_size_kb": 0, 00:22:20.507 "state": "online", 00:22:20.507 "raid_level": "raid1", 00:22:20.507 "superblock": true, 00:22:20.507 "num_base_bdevs": 2, 00:22:20.507 "num_base_bdevs_discovered": 2, 00:22:20.507 "num_base_bdevs_operational": 2, 00:22:20.507 "base_bdevs_list": [ 00:22:20.507 { 00:22:20.507 "name": "BaseBdev1", 00:22:20.507 "uuid": "25a973e9-8af3-57f2-916f-c16f768f8764", 00:22:20.507 "is_configured": true, 00:22:20.507 "data_offset": 256, 00:22:20.507 "data_size": 7936 00:22:20.507 }, 00:22:20.507 { 00:22:20.507 "name": "BaseBdev2", 00:22:20.507 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:20.507 "is_configured": true, 00:22:20.507 "data_offset": 256, 00:22:20.507 "data_size": 7936 
00:22:20.507 } 00:22:20.507 ] 00:22:20.507 }' 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:20.507 07:19:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:20.767 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:20.767 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.027 [2024-11-20 07:19:18.094164] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.027 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:21.286 [2024-11-20 07:19:18.470014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:21.286 /dev/nbd0 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:21.286 1+0 records in 00:22:21.286 1+0 records out 00:22:21.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529337 s, 7.7 MB/s 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:21.286 07:19:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:22:21.286 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:22:21.287 07:19:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:22:22.223 7936+0 records in 00:22:22.223 7936+0 records out 00:22:22.223 32505856 bytes (33 MB, 31 MiB) copied, 0.969901 s, 33.5 MB/s 00:22:22.223 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:22:22.223 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:22.224 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:22.224 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:22.224 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:22.224 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:22.224 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:22.482 [2024-11-20 07:19:19.798266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:22.742 07:19:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.742 [2024-11-20 07:19:19.834396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:22.742 "name": "raid_bdev1", 00:22:22.742 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:22.742 "strip_size_kb": 0, 00:22:22.742 "state": "online", 00:22:22.742 "raid_level": "raid1", 00:22:22.742 "superblock": true, 00:22:22.742 "num_base_bdevs": 2, 00:22:22.742 "num_base_bdevs_discovered": 1, 00:22:22.742 "num_base_bdevs_operational": 1, 00:22:22.742 "base_bdevs_list": [ 00:22:22.742 { 00:22:22.742 "name": null, 00:22:22.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.742 "is_configured": false, 00:22:22.742 "data_offset": 0, 00:22:22.742 "data_size": 7936 00:22:22.742 }, 00:22:22.742 { 00:22:22.742 "name": "BaseBdev2", 00:22:22.742 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:22.742 "is_configured": true, 00:22:22.742 "data_offset": 256, 00:22:22.742 "data_size": 7936 00:22:22.742 } 00:22:22.742 ] 00:22:22.742 }' 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:22.742 07:19:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.312 07:19:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:23.312 07:19:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.312 07:19:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:23.312 [2024-11-20 07:19:20.342561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:23.312 [2024-11-20 07:19:20.356521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:22:23.312 07:19:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.312 07:19:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:23.312 [2024-11-20 07:19:20.359124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:24.248 "name": "raid_bdev1", 00:22:24.248 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:24.248 "strip_size_kb": 0, 00:22:24.248 "state": "online", 00:22:24.248 "raid_level": "raid1", 00:22:24.248 "superblock": true, 00:22:24.248 "num_base_bdevs": 2, 00:22:24.248 "num_base_bdevs_discovered": 2, 00:22:24.248 "num_base_bdevs_operational": 2, 00:22:24.248 "process": { 00:22:24.248 "type": "rebuild", 00:22:24.248 "target": "spare", 00:22:24.248 "progress": { 00:22:24.248 "blocks": 2560, 00:22:24.248 "percent": 32 00:22:24.248 } 00:22:24.248 }, 00:22:24.248 "base_bdevs_list": [ 00:22:24.248 { 00:22:24.248 "name": "spare", 00:22:24.248 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:24.248 "is_configured": true, 00:22:24.248 "data_offset": 256, 00:22:24.248 "data_size": 7936 00:22:24.248 }, 00:22:24.248 { 00:22:24.248 "name": "BaseBdev2", 00:22:24.248 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:24.248 "is_configured": true, 00:22:24.248 "data_offset": 256, 00:22:24.248 "data_size": 7936 00:22:24.248 } 00:22:24.248 ] 00:22:24.248 }' 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:24.248 07:19:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.248 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.248 [2024-11-20 07:19:21.520885] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.508 [2024-11-20 07:19:21.568225] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:24.508 [2024-11-20 07:19:21.568336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:24.508 [2024-11-20 07:19:21.568361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:24.508 [2024-11-20 07:19:21.568375] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:24.508 07:19:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:24.508 "name": "raid_bdev1", 00:22:24.508 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:24.508 "strip_size_kb": 0, 00:22:24.508 "state": "online", 00:22:24.508 "raid_level": "raid1", 00:22:24.508 "superblock": true, 00:22:24.508 "num_base_bdevs": 2, 00:22:24.508 "num_base_bdevs_discovered": 1, 00:22:24.508 "num_base_bdevs_operational": 1, 00:22:24.508 "base_bdevs_list": [ 00:22:24.508 { 00:22:24.508 "name": null, 00:22:24.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.508 "is_configured": false, 00:22:24.508 "data_offset": 0, 00:22:24.508 "data_size": 7936 00:22:24.508 }, 00:22:24.508 { 00:22:24.508 "name": "BaseBdev2", 00:22:24.508 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:24.508 "is_configured": true, 00:22:24.508 "data_offset": 256, 00:22:24.508 "data_size": 7936 00:22:24.508 } 00:22:24.508 ] 00:22:24.508 }' 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:24.508 07:19:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:25.078 "name": "raid_bdev1", 00:22:25.078 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:25.078 "strip_size_kb": 0, 00:22:25.078 "state": "online", 00:22:25.078 "raid_level": "raid1", 00:22:25.078 "superblock": true, 00:22:25.078 "num_base_bdevs": 2, 00:22:25.078 "num_base_bdevs_discovered": 1, 00:22:25.078 "num_base_bdevs_operational": 1, 00:22:25.078 "base_bdevs_list": [ 00:22:25.078 { 00:22:25.078 "name": null, 00:22:25.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.078 
"is_configured": false, 00:22:25.078 "data_offset": 0, 00:22:25.078 "data_size": 7936 00:22:25.078 }, 00:22:25.078 { 00:22:25.078 "name": "BaseBdev2", 00:22:25.078 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:25.078 "is_configured": true, 00:22:25.078 "data_offset": 256, 00:22:25.078 "data_size": 7936 00:22:25.078 } 00:22:25.078 ] 00:22:25.078 }' 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:25.078 [2024-11-20 07:19:22.306907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.078 [2024-11-20 07:19:22.319936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.078 07:19:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:25.078 [2024-11-20 07:19:22.322408] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.014 07:19:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.014 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.273 "name": "raid_bdev1", 00:22:26.273 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:26.273 "strip_size_kb": 0, 00:22:26.273 "state": "online", 00:22:26.273 "raid_level": "raid1", 00:22:26.273 "superblock": true, 00:22:26.273 "num_base_bdevs": 2, 00:22:26.273 "num_base_bdevs_discovered": 2, 00:22:26.273 "num_base_bdevs_operational": 2, 00:22:26.273 "process": { 00:22:26.273 "type": "rebuild", 00:22:26.273 "target": "spare", 00:22:26.273 "progress": { 00:22:26.273 "blocks": 2560, 00:22:26.273 "percent": 32 00:22:26.273 } 00:22:26.273 }, 00:22:26.273 "base_bdevs_list": [ 00:22:26.273 { 00:22:26.273 "name": "spare", 00:22:26.273 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:26.273 "is_configured": true, 00:22:26.273 "data_offset": 256, 00:22:26.273 "data_size": 7936 00:22:26.273 }, 
00:22:26.273 { 00:22:26.273 "name": "BaseBdev2", 00:22:26.273 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:26.273 "is_configured": true, 00:22:26.273 "data_offset": 256, 00:22:26.273 "data_size": 7936 00:22:26.273 } 00:22:26.273 ] 00:22:26.273 }' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:26.273 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=768 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:26.273 07:19:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.273 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:26.273 "name": "raid_bdev1", 00:22:26.273 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:26.273 "strip_size_kb": 0, 00:22:26.273 "state": "online", 00:22:26.273 "raid_level": "raid1", 00:22:26.273 "superblock": true, 00:22:26.273 "num_base_bdevs": 2, 00:22:26.273 "num_base_bdevs_discovered": 2, 00:22:26.273 "num_base_bdevs_operational": 2, 00:22:26.273 "process": { 00:22:26.273 "type": "rebuild", 00:22:26.273 "target": "spare", 00:22:26.273 "progress": { 00:22:26.274 "blocks": 2816, 00:22:26.274 "percent": 35 00:22:26.274 } 00:22:26.274 }, 00:22:26.274 "base_bdevs_list": [ 00:22:26.274 { 00:22:26.274 "name": "spare", 00:22:26.274 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:26.274 "is_configured": true, 00:22:26.274 "data_offset": 256, 00:22:26.274 "data_size": 7936 00:22:26.274 }, 00:22:26.274 { 00:22:26.274 "name": "BaseBdev2", 00:22:26.274 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:26.274 
"is_configured": true, 00:22:26.274 "data_offset": 256, 00:22:26.274 "data_size": 7936 00:22:26.274 } 00:22:26.274 ] 00:22:26.274 }' 00:22:26.274 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:26.274 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.274 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:26.554 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.554 07:19:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:27.490 07:19:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:27.490 "name": "raid_bdev1", 00:22:27.490 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:27.490 "strip_size_kb": 0, 00:22:27.490 "state": "online", 00:22:27.490 "raid_level": "raid1", 00:22:27.490 "superblock": true, 00:22:27.490 "num_base_bdevs": 2, 00:22:27.490 "num_base_bdevs_discovered": 2, 00:22:27.490 "num_base_bdevs_operational": 2, 00:22:27.490 "process": { 00:22:27.490 "type": "rebuild", 00:22:27.490 "target": "spare", 00:22:27.490 "progress": { 00:22:27.490 "blocks": 5888, 00:22:27.490 "percent": 74 00:22:27.490 } 00:22:27.490 }, 00:22:27.490 "base_bdevs_list": [ 00:22:27.490 { 00:22:27.490 "name": "spare", 00:22:27.490 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:27.490 "is_configured": true, 00:22:27.490 "data_offset": 256, 00:22:27.490 "data_size": 7936 00:22:27.490 }, 00:22:27.490 { 00:22:27.490 "name": "BaseBdev2", 00:22:27.490 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:27.490 "is_configured": true, 00:22:27.490 "data_offset": 256, 00:22:27.490 "data_size": 7936 00:22:27.490 } 00:22:27.490 ] 00:22:27.490 }' 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.490 07:19:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:22:28.427 [2024-11-20 07:19:25.445333] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:22:28.427 [2024-11-20 07:19:25.445656] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:28.427 [2024-11-20 07:19:25.445824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.687 "name": "raid_bdev1", 00:22:28.687 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:28.687 "strip_size_kb": 0, 00:22:28.687 "state": "online", 00:22:28.687 "raid_level": "raid1", 00:22:28.687 "superblock": true, 00:22:28.687 
"num_base_bdevs": 2, 00:22:28.687 "num_base_bdevs_discovered": 2, 00:22:28.687 "num_base_bdevs_operational": 2, 00:22:28.687 "base_bdevs_list": [ 00:22:28.687 { 00:22:28.687 "name": "spare", 00:22:28.687 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:28.687 "is_configured": true, 00:22:28.687 "data_offset": 256, 00:22:28.687 "data_size": 7936 00:22:28.687 }, 00:22:28.687 { 00:22:28.687 "name": "BaseBdev2", 00:22:28.687 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:28.687 "is_configured": true, 00:22:28.687 "data_offset": 256, 00:22:28.687 "data_size": 7936 00:22:28.687 } 00:22:28.687 ] 00:22:28.687 }' 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.687 07:19:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.687 07:19:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:28.951 "name": "raid_bdev1", 00:22:28.951 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:28.951 "strip_size_kb": 0, 00:22:28.951 "state": "online", 00:22:28.951 "raid_level": "raid1", 00:22:28.951 "superblock": true, 00:22:28.951 "num_base_bdevs": 2, 00:22:28.951 "num_base_bdevs_discovered": 2, 00:22:28.951 "num_base_bdevs_operational": 2, 00:22:28.951 "base_bdevs_list": [ 00:22:28.951 { 00:22:28.951 "name": "spare", 00:22:28.951 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:28.951 "is_configured": true, 00:22:28.951 "data_offset": 256, 00:22:28.951 "data_size": 7936 00:22:28.951 }, 00:22:28.951 { 00:22:28.951 "name": "BaseBdev2", 00:22:28.951 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:28.951 "is_configured": true, 00:22:28.951 "data_offset": 256, 00:22:28.951 "data_size": 7936 00:22:28.951 } 00:22:28.951 ] 00:22:28.951 }' 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:28.951 "name": "raid_bdev1", 00:22:28.951 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:28.951 
"strip_size_kb": 0, 00:22:28.951 "state": "online", 00:22:28.951 "raid_level": "raid1", 00:22:28.951 "superblock": true, 00:22:28.951 "num_base_bdevs": 2, 00:22:28.951 "num_base_bdevs_discovered": 2, 00:22:28.951 "num_base_bdevs_operational": 2, 00:22:28.951 "base_bdevs_list": [ 00:22:28.951 { 00:22:28.951 "name": "spare", 00:22:28.951 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:28.951 "is_configured": true, 00:22:28.951 "data_offset": 256, 00:22:28.951 "data_size": 7936 00:22:28.951 }, 00:22:28.951 { 00:22:28.951 "name": "BaseBdev2", 00:22:28.951 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:28.951 "is_configured": true, 00:22:28.951 "data_offset": 256, 00:22:28.951 "data_size": 7936 00:22:28.951 } 00:22:28.951 ] 00:22:28.951 }' 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:28.951 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.519 [2024-11-20 07:19:26.664498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:29.519 [2024-11-20 07:19:26.664539] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:29.519 [2024-11-20 07:19:26.664650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:29.519 [2024-11-20 07:19:26.664750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:29.519 [2024-11-20 07:19:26.664766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.519 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.520 07:19:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:29.778 /dev/nbd0 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:29.778 1+0 records in 00:22:29.778 1+0 records out 00:22:29.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031717 s, 12.9 MB/s 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:29.778 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:22:30.344 /dev/nbd1 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:30.344 1+0 records in 00:22:30.344 1+0 records out 00:22:30.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040757 s, 10.0 MB/s 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:30.344 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.345 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:30.911 07:19:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.170 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.171 [2024-11-20 07:19:28.254497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:31.171 [2024-11-20 07:19:28.254561] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.171 [2024-11-20 07:19:28.254593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:31.171 [2024-11-20 07:19:28.254609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:22:31.171 [2024-11-20 07:19:28.257127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.171 [2024-11-20 07:19:28.257171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:31.171 [2024-11-20 07:19:28.257260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:31.171 [2024-11-20 07:19:28.257331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:31.171 [2024-11-20 07:19:28.257508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:31.171 spare 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.171 [2024-11-20 07:19:28.357632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:22:31.171 [2024-11-20 07:19:28.357688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:22:31.171 [2024-11-20 07:19:28.357842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:22:31.171 [2024-11-20 07:19:28.358078] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:22:31.171 [2024-11-20 07:19:28.358106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:22:31.171 [2024-11-20 07:19:28.358277] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.171 "name": "raid_bdev1", 00:22:31.171 "uuid": 
"9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:31.171 "strip_size_kb": 0, 00:22:31.171 "state": "online", 00:22:31.171 "raid_level": "raid1", 00:22:31.171 "superblock": true, 00:22:31.171 "num_base_bdevs": 2, 00:22:31.171 "num_base_bdevs_discovered": 2, 00:22:31.171 "num_base_bdevs_operational": 2, 00:22:31.171 "base_bdevs_list": [ 00:22:31.171 { 00:22:31.171 "name": "spare", 00:22:31.171 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:31.171 "is_configured": true, 00:22:31.171 "data_offset": 256, 00:22:31.171 "data_size": 7936 00:22:31.171 }, 00:22:31.171 { 00:22:31.171 "name": "BaseBdev2", 00:22:31.171 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:31.171 "is_configured": true, 00:22:31.171 "data_offset": 256, 00:22:31.171 "data_size": 7936 00:22:31.171 } 00:22:31.171 ] 00:22:31.171 }' 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.171 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:31.738 "name": "raid_bdev1", 00:22:31.738 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:31.738 "strip_size_kb": 0, 00:22:31.738 "state": "online", 00:22:31.738 "raid_level": "raid1", 00:22:31.738 "superblock": true, 00:22:31.738 "num_base_bdevs": 2, 00:22:31.738 "num_base_bdevs_discovered": 2, 00:22:31.738 "num_base_bdevs_operational": 2, 00:22:31.738 "base_bdevs_list": [ 00:22:31.738 { 00:22:31.738 "name": "spare", 00:22:31.738 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:31.738 "is_configured": true, 00:22:31.738 "data_offset": 256, 00:22:31.738 "data_size": 7936 00:22:31.738 }, 00:22:31.738 { 00:22:31.738 "name": "BaseBdev2", 00:22:31.738 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:31.738 "is_configured": true, 00:22:31.738 "data_offset": 256, 00:22:31.738 "data_size": 7936 00:22:31.738 } 00:22:31.738 ] 00:22:31.738 }' 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:31.738 07:19:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:31.738 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:31.738 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.738 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:31.738 
07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.738 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.738 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.739 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.739 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:31.739 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.739 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.998 [2024-11-20 07:19:29.058808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:31.998 "name": "raid_bdev1", 00:22:31.998 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:31.998 "strip_size_kb": 0, 00:22:31.998 "state": "online", 00:22:31.998 "raid_level": "raid1", 00:22:31.998 "superblock": true, 00:22:31.998 "num_base_bdevs": 2, 00:22:31.998 "num_base_bdevs_discovered": 1, 00:22:31.998 "num_base_bdevs_operational": 1, 00:22:31.998 "base_bdevs_list": [ 00:22:31.998 { 00:22:31.998 "name": null, 00:22:31.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.998 "is_configured": false, 00:22:31.998 "data_offset": 0, 00:22:31.998 "data_size": 7936 00:22:31.998 }, 00:22:31.998 { 00:22:31.998 "name": "BaseBdev2", 00:22:31.998 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:31.998 "is_configured": true, 00:22:31.998 "data_offset": 256, 00:22:31.998 "data_size": 7936 00:22:31.998 } 00:22:31.998 ] 00:22:31.998 }' 00:22:31.998 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:31.998 07:19:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.256 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:32.256 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.256 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:32.515 [2024-11-20 07:19:29.574996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.515 [2024-11-20 07:19:29.575224] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:22:32.515 [2024-11-20 07:19:29.575251] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:32.515 [2024-11-20 07:19:29.575300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:32.515 [2024-11-20 07:19:29.587956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:22:32.515 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.515 07:19:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:22:32.515 [2024-11-20 07:19:29.590503] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:33.452 "name": "raid_bdev1", 00:22:33.452 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:33.452 "strip_size_kb": 0, 00:22:33.452 "state": "online", 00:22:33.452 "raid_level": "raid1", 00:22:33.452 "superblock": true, 00:22:33.452 "num_base_bdevs": 2, 00:22:33.452 "num_base_bdevs_discovered": 2, 00:22:33.452 "num_base_bdevs_operational": 2, 00:22:33.452 "process": { 00:22:33.452 "type": "rebuild", 00:22:33.452 "target": "spare", 00:22:33.452 "progress": { 00:22:33.452 "blocks": 2560, 00:22:33.452 "percent": 32 00:22:33.452 } 00:22:33.452 }, 00:22:33.452 "base_bdevs_list": [ 00:22:33.452 { 00:22:33.452 "name": "spare", 00:22:33.452 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:33.452 "is_configured": true, 00:22:33.452 "data_offset": 256, 00:22:33.452 "data_size": 7936 00:22:33.452 }, 00:22:33.452 { 00:22:33.452 "name": "BaseBdev2", 00:22:33.452 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:33.452 "is_configured": true, 00:22:33.452 "data_offset": 256, 00:22:33.452 "data_size": 7936 00:22:33.452 } 00:22:33.452 ] 00:22:33.452 }' 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.452 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.453 [2024-11-20 07:19:30.764437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:33.711 [2024-11-20 07:19:30.800353] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:33.711 [2024-11-20 07:19:30.800468] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.711 [2024-11-20 07:19:30.800493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:33.711 [2024-11-20 07:19:30.800537] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:33.711 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:33.712 "name": "raid_bdev1", 00:22:33.712 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:33.712 "strip_size_kb": 0, 00:22:33.712 "state": "online", 00:22:33.712 "raid_level": "raid1", 00:22:33.712 "superblock": true, 00:22:33.712 "num_base_bdevs": 2, 00:22:33.712 "num_base_bdevs_discovered": 1, 00:22:33.712 "num_base_bdevs_operational": 1, 00:22:33.712 "base_bdevs_list": [ 00:22:33.712 { 00:22:33.712 "name": null, 00:22:33.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.712 
"is_configured": false, 00:22:33.712 "data_offset": 0, 00:22:33.712 "data_size": 7936 00:22:33.712 }, 00:22:33.712 { 00:22:33.712 "name": "BaseBdev2", 00:22:33.712 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:33.712 "is_configured": true, 00:22:33.712 "data_offset": 256, 00:22:33.712 "data_size": 7936 00:22:33.712 } 00:22:33.712 ] 00:22:33.712 }' 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:33.712 07:19:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.277 07:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:34.277 07:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.277 07:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:34.277 [2024-11-20 07:19:31.326820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:34.277 [2024-11-20 07:19:31.326921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:34.277 [2024-11-20 07:19:31.326984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:34.277 [2024-11-20 07:19:31.327011] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:34.277 [2024-11-20 07:19:31.327395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:34.277 [2024-11-20 07:19:31.327424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:34.277 [2024-11-20 07:19:31.327520] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:22:34.277 [2024-11-20 07:19:31.327562] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:22:34.277 [2024-11-20 07:19:31.327579] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:22:34.277 [2024-11-20 07:19:31.327613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:34.277 [2024-11-20 07:19:31.340281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:22:34.277 spare 00:22:34.278 07:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.278 07:19:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:22:34.278 [2024-11-20 07:19:31.342791] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:35.213 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:35.213 "name": "raid_bdev1", 00:22:35.213 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:35.213 "strip_size_kb": 0, 00:22:35.213 "state": "online", 00:22:35.213 "raid_level": "raid1", 00:22:35.213 "superblock": true, 00:22:35.213 "num_base_bdevs": 2, 00:22:35.213 "num_base_bdevs_discovered": 2, 00:22:35.213 "num_base_bdevs_operational": 2, 00:22:35.213 "process": { 00:22:35.213 "type": "rebuild", 00:22:35.213 "target": "spare", 00:22:35.214 "progress": { 00:22:35.214 "blocks": 2560, 00:22:35.214 "percent": 32 00:22:35.214 } 00:22:35.214 }, 00:22:35.214 "base_bdevs_list": [ 00:22:35.214 { 00:22:35.214 "name": "spare", 00:22:35.214 "uuid": "514d5f93-1b12-52a7-b0ee-0341c45f59bb", 00:22:35.214 "is_configured": true, 00:22:35.214 "data_offset": 256, 00:22:35.214 "data_size": 7936 00:22:35.214 }, 00:22:35.214 { 00:22:35.214 "name": "BaseBdev2", 00:22:35.214 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:35.214 "is_configured": true, 00:22:35.214 "data_offset": 256, 00:22:35.214 "data_size": 7936 00:22:35.214 } 00:22:35.214 ] 00:22:35.214 }' 00:22:35.214 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:35.214 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:35.214 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:35.214 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:35.214 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:22:35.214 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.214 07:19:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.472 [2024-11-20 07:19:32.532636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:35.472 [2024-11-20 07:19:32.552010] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:35.472 [2024-11-20 07:19:32.552101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.472 [2024-11-20 07:19:32.552129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:35.472 [2024-11-20 07:19:32.552140] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:35.472 07:19:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:35.472 "name": "raid_bdev1", 00:22:35.472 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:35.472 "strip_size_kb": 0, 00:22:35.472 "state": "online", 00:22:35.472 "raid_level": "raid1", 00:22:35.472 "superblock": true, 00:22:35.472 "num_base_bdevs": 2, 00:22:35.472 "num_base_bdevs_discovered": 1, 00:22:35.472 "num_base_bdevs_operational": 1, 00:22:35.472 "base_bdevs_list": [ 00:22:35.472 { 00:22:35.472 "name": null, 00:22:35.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:35.472 "is_configured": false, 00:22:35.472 "data_offset": 0, 00:22:35.472 "data_size": 7936 00:22:35.472 }, 00:22:35.472 { 00:22:35.472 "name": "BaseBdev2", 00:22:35.472 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:35.472 "is_configured": true, 00:22:35.472 "data_offset": 256, 00:22:35.472 "data_size": 7936 00:22:35.472 } 00:22:35.472 ] 00:22:35.472 }' 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:35.472 07:19:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.037 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:22:36.037 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:36.037 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:36.037 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:36.037 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:36.038 "name": "raid_bdev1", 00:22:36.038 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:36.038 "strip_size_kb": 0, 00:22:36.038 "state": "online", 00:22:36.038 "raid_level": "raid1", 00:22:36.038 "superblock": true, 00:22:36.038 "num_base_bdevs": 2, 00:22:36.038 "num_base_bdevs_discovered": 1, 00:22:36.038 "num_base_bdevs_operational": 1, 00:22:36.038 "base_bdevs_list": [ 00:22:36.038 { 00:22:36.038 "name": null, 00:22:36.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:36.038 "is_configured": false, 00:22:36.038 "data_offset": 0, 00:22:36.038 "data_size": 7936 00:22:36.038 }, 00:22:36.038 { 00:22:36.038 "name": "BaseBdev2", 00:22:36.038 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:36.038 "is_configured": true, 
00:22:36.038 "data_offset": 256, 00:22:36.038 "data_size": 7936 00:22:36.038 } 00:22:36.038 ] 00:22:36.038 }' 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.038 [2024-11-20 07:19:33.254542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:36.038 [2024-11-20 07:19:33.254609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.038 [2024-11-20 07:19:33.254645] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:36.038 [2024-11-20 07:19:33.254660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.038 [2024-11-20 07:19:33.254954] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.038 [2024-11-20 07:19:33.254978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:36.038 [2024-11-20 07:19:33.255045] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:36.038 [2024-11-20 07:19:33.255065] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:36.038 [2024-11-20 07:19:33.255082] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:36.038 [2024-11-20 07:19:33.255095] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:22:36.038 BaseBdev1 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.038 07:19:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:36.974 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.232 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:37.232 "name": "raid_bdev1", 00:22:37.232 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:37.232 "strip_size_kb": 0, 00:22:37.232 "state": "online", 00:22:37.232 "raid_level": "raid1", 00:22:37.232 "superblock": true, 00:22:37.232 "num_base_bdevs": 2, 00:22:37.232 "num_base_bdevs_discovered": 1, 00:22:37.232 "num_base_bdevs_operational": 1, 00:22:37.232 "base_bdevs_list": [ 00:22:37.232 { 00:22:37.232 "name": null, 00:22:37.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.232 "is_configured": false, 00:22:37.232 "data_offset": 0, 00:22:37.232 "data_size": 7936 00:22:37.232 }, 00:22:37.232 { 00:22:37.232 "name": "BaseBdev2", 00:22:37.232 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:37.232 "is_configured": true, 00:22:37.232 "data_offset": 256, 00:22:37.232 "data_size": 7936 00:22:37.232 } 00:22:37.232 ] 00:22:37.232 }' 00:22:37.232 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:37.232 07:19:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.491 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:37.749 "name": "raid_bdev1", 00:22:37.749 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:37.749 "strip_size_kb": 0, 00:22:37.749 "state": "online", 00:22:37.749 "raid_level": "raid1", 00:22:37.749 "superblock": true, 00:22:37.749 "num_base_bdevs": 2, 00:22:37.749 "num_base_bdevs_discovered": 1, 00:22:37.749 "num_base_bdevs_operational": 1, 00:22:37.749 "base_bdevs_list": [ 00:22:37.749 { 00:22:37.749 "name": null, 00:22:37.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:37.749 "is_configured": false, 00:22:37.749 "data_offset": 0, 00:22:37.749 
"data_size": 7936 00:22:37.749 }, 00:22:37.749 { 00:22:37.749 "name": "BaseBdev2", 00:22:37.749 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:37.749 "is_configured": true, 00:22:37.749 "data_offset": 256, 00:22:37.749 "data_size": 7936 00:22:37.749 } 00:22:37.749 ] 00:22:37.749 }' 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:37.749 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:37.749 [2024-11-20 07:19:34.947161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.749 [2024-11-20 07:19:34.947572] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:22:37.750 [2024-11-20 07:19:34.947607] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:22:37.750 request: 00:22:37.750 { 00:22:37.750 "base_bdev": "BaseBdev1", 00:22:37.750 "raid_bdev": "raid_bdev1", 00:22:37.750 "method": "bdev_raid_add_base_bdev", 00:22:37.750 "req_id": 1 00:22:37.750 } 00:22:37.750 Got JSON-RPC error response 00:22:37.750 response: 00:22:37.750 { 00:22:37.750 "code": -22, 00:22:37.750 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:22:37.750 } 00:22:37.750 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:37.750 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:22:37.750 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:37.750 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:37.750 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:37.750 07:19:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:38.685 07:19:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.943 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:38.943 "name": "raid_bdev1", 00:22:38.943 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:38.943 "strip_size_kb": 0, 00:22:38.943 "state": "online", 00:22:38.943 "raid_level": "raid1", 00:22:38.943 "superblock": true, 00:22:38.943 "num_base_bdevs": 2, 00:22:38.943 "num_base_bdevs_discovered": 1, 00:22:38.943 "num_base_bdevs_operational": 1, 00:22:38.943 "base_bdevs_list": [ 
00:22:38.943 { 00:22:38.943 "name": null, 00:22:38.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.943 "is_configured": false, 00:22:38.943 "data_offset": 0, 00:22:38.943 "data_size": 7936 00:22:38.943 }, 00:22:38.943 { 00:22:38.943 "name": "BaseBdev2", 00:22:38.943 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:38.943 "is_configured": true, 00:22:38.943 "data_offset": 256, 00:22:38.943 "data_size": 7936 00:22:38.943 } 00:22:38.943 ] 00:22:38.943 }' 00:22:38.943 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:38.943 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.201 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:39.460 "name": "raid_bdev1", 00:22:39.460 "uuid": "9ca36d23-b39f-4af3-ba4c-02b2222e1e8f", 00:22:39.460 "strip_size_kb": 0, 00:22:39.460 "state": "online", 00:22:39.460 "raid_level": "raid1", 00:22:39.460 "superblock": true, 00:22:39.460 "num_base_bdevs": 2, 00:22:39.460 "num_base_bdevs_discovered": 1, 00:22:39.460 "num_base_bdevs_operational": 1, 00:22:39.460 "base_bdevs_list": [ 00:22:39.460 { 00:22:39.460 "name": null, 00:22:39.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.460 "is_configured": false, 00:22:39.460 "data_offset": 0, 00:22:39.460 "data_size": 7936 00:22:39.460 }, 00:22:39.460 { 00:22:39.460 "name": "BaseBdev2", 00:22:39.460 "uuid": "2c551d76-03ec-56f9-8db0-960eb96bdca0", 00:22:39.460 "is_configured": true, 00:22:39.460 "data_offset": 256, 00:22:39.460 "data_size": 7936 00:22:39.460 } 00:22:39.460 ] 00:22:39.460 }' 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88135 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88135 ']' 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88135 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.460 
07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88135 00:22:39.460 killing process with pid 88135 00:22:39.460 Received shutdown signal, test time was about 60.000000 seconds 00:22:39.460 00:22:39.460 Latency(us) 00:22:39.460 [2024-11-20T07:19:36.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.460 [2024-11-20T07:19:36.780Z] =================================================================================================================== 00:22:39.460 [2024-11-20T07:19:36.780Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88135' 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88135 00:22:39.460 [2024-11-20 07:19:36.665611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:39.460 07:19:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88135 00:22:39.460 [2024-11-20 07:19:36.665784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:39.460 [2024-11-20 07:19:36.665896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:39.460 [2024-11-20 07:19:36.665923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:22:39.718 [2024-11-20 07:19:36.984132] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.095 ************************************ 00:22:41.095 END TEST 
raid_rebuild_test_sb_md_separate 00:22:41.095 ************************************ 00:22:41.095 07:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:22:41.095 00:22:41.095 real 0m21.837s 00:22:41.095 user 0m29.596s 00:22:41.095 sys 0m2.483s 00:22:41.095 07:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:41.095 07:19:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:22:41.095 07:19:38 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:22:41.095 07:19:38 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:22:41.095 07:19:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:41.095 07:19:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:41.095 07:19:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:41.095 ************************************ 00:22:41.095 START TEST raid_state_function_test_sb_md_interleaved 00:22:41.095 ************************************ 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:22:41.095 07:19:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:22:41.095 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = 
true ']' 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88836 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88836' 00:22:41.096 Process raid pid: 88836 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88836 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88836 ']' 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.096 07:19:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:41.096 [2024-11-20 07:19:38.220426] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:22:41.096 [2024-11-20 07:19:38.220639] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.096 [2024-11-20 07:19:38.399795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.355 [2024-11-20 07:19:38.536465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.613 [2024-11-20 07:19:38.745946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:41.613 [2024-11-20 07:19:38.746209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.182 [2024-11-20 07:19:39.253959] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.182 [2024-11-20 07:19:39.254021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.182 [2024-11-20 07:19:39.254038] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:42.182 [2024-11-20 07:19:39.254053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:42.182 07:19:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.182 07:19:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.182 "name": "Existed_Raid", 00:22:42.182 "uuid": "9c60d59d-0049-4442-9680-10cde7ca24a1", 00:22:42.182 "strip_size_kb": 0, 00:22:42.182 "state": "configuring", 00:22:42.182 "raid_level": "raid1", 00:22:42.182 "superblock": true, 00:22:42.182 "num_base_bdevs": 2, 00:22:42.182 "num_base_bdevs_discovered": 0, 00:22:42.182 "num_base_bdevs_operational": 2, 00:22:42.182 "base_bdevs_list": [ 00:22:42.182 { 00:22:42.182 "name": "BaseBdev1", 00:22:42.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.182 "is_configured": false, 00:22:42.182 "data_offset": 0, 00:22:42.182 "data_size": 0 00:22:42.182 }, 00:22:42.182 { 00:22:42.182 "name": "BaseBdev2", 00:22:42.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.182 "is_configured": false, 00:22:42.182 "data_offset": 0, 00:22:42.182 "data_size": 0 00:22:42.182 } 00:22:42.182 ] 00:22:42.182 }' 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.182 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 [2024-11-20 07:19:39.786051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:42.751 [2024-11-20 07:19:39.786090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 [2024-11-20 07:19:39.794035] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:42.751 [2024-11-20 07:19:39.794207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:42.751 [2024-11-20 07:19:39.794324] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:42.751 [2024-11-20 07:19:39.794469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 [2024-11-20 07:19:39.843920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:42.751 BaseBdev1 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 [ 00:22:42.751 { 00:22:42.751 "name": "BaseBdev1", 00:22:42.751 "aliases": [ 00:22:42.751 "6fd51a9f-e016-4471-948f-072b26bdd463" 00:22:42.751 ], 00:22:42.751 "product_name": "Malloc disk", 00:22:42.751 "block_size": 4128, 00:22:42.751 "num_blocks": 8192, 00:22:42.751 "uuid": "6fd51a9f-e016-4471-948f-072b26bdd463", 00:22:42.751 "md_size": 32, 00:22:42.751 
"md_interleave": true, 00:22:42.751 "dif_type": 0, 00:22:42.751 "assigned_rate_limits": { 00:22:42.751 "rw_ios_per_sec": 0, 00:22:42.751 "rw_mbytes_per_sec": 0, 00:22:42.751 "r_mbytes_per_sec": 0, 00:22:42.751 "w_mbytes_per_sec": 0 00:22:42.751 }, 00:22:42.751 "claimed": true, 00:22:42.751 "claim_type": "exclusive_write", 00:22:42.751 "zoned": false, 00:22:42.751 "supported_io_types": { 00:22:42.751 "read": true, 00:22:42.751 "write": true, 00:22:42.751 "unmap": true, 00:22:42.751 "flush": true, 00:22:42.751 "reset": true, 00:22:42.751 "nvme_admin": false, 00:22:42.751 "nvme_io": false, 00:22:42.751 "nvme_io_md": false, 00:22:42.751 "write_zeroes": true, 00:22:42.751 "zcopy": true, 00:22:42.751 "get_zone_info": false, 00:22:42.751 "zone_management": false, 00:22:42.751 "zone_append": false, 00:22:42.751 "compare": false, 00:22:42.751 "compare_and_write": false, 00:22:42.751 "abort": true, 00:22:42.751 "seek_hole": false, 00:22:42.751 "seek_data": false, 00:22:42.751 "copy": true, 00:22:42.751 "nvme_iov_md": false 00:22:42.751 }, 00:22:42.751 "memory_domains": [ 00:22:42.751 { 00:22:42.751 "dma_device_id": "system", 00:22:42.751 "dma_device_type": 1 00:22:42.751 }, 00:22:42.751 { 00:22:42.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:42.751 "dma_device_type": 2 00:22:42.751 } 00:22:42.751 ], 00:22:42.751 "driver_specific": {} 00:22:42.751 } 00:22:42.751 ] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:42.751 07:19:39 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:42.751 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.752 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.752 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:42.752 "name": "Existed_Raid", 00:22:42.752 "uuid": "55334641-4e92-4046-99b2-a14f54e4ae33", 00:22:42.752 "strip_size_kb": 0, 00:22:42.752 "state": "configuring", 00:22:42.752 "raid_level": "raid1", 
00:22:42.752 "superblock": true, 00:22:42.752 "num_base_bdevs": 2, 00:22:42.752 "num_base_bdevs_discovered": 1, 00:22:42.752 "num_base_bdevs_operational": 2, 00:22:42.752 "base_bdevs_list": [ 00:22:42.752 { 00:22:42.752 "name": "BaseBdev1", 00:22:42.752 "uuid": "6fd51a9f-e016-4471-948f-072b26bdd463", 00:22:42.752 "is_configured": true, 00:22:42.752 "data_offset": 256, 00:22:42.752 "data_size": 7936 00:22:42.752 }, 00:22:42.752 { 00:22:42.752 "name": "BaseBdev2", 00:22:42.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.752 "is_configured": false, 00:22:42.752 "data_offset": 0, 00:22:42.752 "data_size": 0 00:22:42.752 } 00:22:42.752 ] 00:22:42.752 }' 00:22:42.752 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:42.752 07:19:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 [2024-11-20 07:19:40.408244] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:43.320 [2024-11-20 07:19:40.408306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 [2024-11-20 07:19:40.416299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:43.320 [2024-11-20 07:19:40.418814] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:43.320 [2024-11-20 07:19:40.418894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.320 
07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.320 "name": "Existed_Raid", 00:22:43.320 "uuid": "65ba8195-41c0-4ad0-8580-361beab5e41e", 00:22:43.320 "strip_size_kb": 0, 00:22:43.320 "state": "configuring", 00:22:43.320 "raid_level": "raid1", 00:22:43.320 "superblock": true, 00:22:43.320 "num_base_bdevs": 2, 00:22:43.320 "num_base_bdevs_discovered": 1, 00:22:43.320 "num_base_bdevs_operational": 2, 00:22:43.320 "base_bdevs_list": [ 00:22:43.320 { 00:22:43.320 "name": "BaseBdev1", 00:22:43.320 "uuid": "6fd51a9f-e016-4471-948f-072b26bdd463", 00:22:43.320 "is_configured": true, 00:22:43.320 "data_offset": 256, 00:22:43.320 "data_size": 7936 00:22:43.320 }, 00:22:43.320 { 00:22:43.320 "name": "BaseBdev2", 00:22:43.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.320 "is_configured": false, 00:22:43.320 "data_offset": 0, 00:22:43.320 "data_size": 0 00:22:43.320 } 00:22:43.320 ] 00:22:43.320 }' 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:22:43.320 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.888 [2024-11-20 07:19:40.990718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:43.888 [2024-11-20 07:19:40.991015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:43.888 [2024-11-20 07:19:40.991035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:43.888 [2024-11-20 07:19:40.991175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:43.888 [2024-11-20 07:19:40.991278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:43.888 [2024-11-20 07:19:40.991297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:22:43.888 [2024-11-20 07:19:40.991381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.888 BaseBdev2 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.888 07:19:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.888 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.888 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:43.888 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.888 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.888 [ 00:22:43.888 { 00:22:43.888 "name": "BaseBdev2", 00:22:43.888 "aliases": [ 00:22:43.888 "b35a140e-1b93-4fea-83e5-010323037149" 00:22:43.888 ], 00:22:43.888 "product_name": "Malloc disk", 00:22:43.888 "block_size": 4128, 00:22:43.888 "num_blocks": 8192, 00:22:43.888 "uuid": "b35a140e-1b93-4fea-83e5-010323037149", 00:22:43.888 "md_size": 32, 00:22:43.888 "md_interleave": true, 00:22:43.888 "dif_type": 0, 00:22:43.888 "assigned_rate_limits": { 00:22:43.888 "rw_ios_per_sec": 0, 00:22:43.888 "rw_mbytes_per_sec": 0, 00:22:43.888 "r_mbytes_per_sec": 0, 00:22:43.889 "w_mbytes_per_sec": 0 00:22:43.889 }, 00:22:43.889 "claimed": true, 00:22:43.889 "claim_type": "exclusive_write", 
00:22:43.889 "zoned": false, 00:22:43.889 "supported_io_types": { 00:22:43.889 "read": true, 00:22:43.889 "write": true, 00:22:43.889 "unmap": true, 00:22:43.889 "flush": true, 00:22:43.889 "reset": true, 00:22:43.889 "nvme_admin": false, 00:22:43.889 "nvme_io": false, 00:22:43.889 "nvme_io_md": false, 00:22:43.889 "write_zeroes": true, 00:22:43.889 "zcopy": true, 00:22:43.889 "get_zone_info": false, 00:22:43.889 "zone_management": false, 00:22:43.889 "zone_append": false, 00:22:43.889 "compare": false, 00:22:43.889 "compare_and_write": false, 00:22:43.889 "abort": true, 00:22:43.889 "seek_hole": false, 00:22:43.889 "seek_data": false, 00:22:43.889 "copy": true, 00:22:43.889 "nvme_iov_md": false 00:22:43.889 }, 00:22:43.889 "memory_domains": [ 00:22:43.889 { 00:22:43.889 "dma_device_id": "system", 00:22:43.889 "dma_device_type": 1 00:22:43.889 }, 00:22:43.889 { 00:22:43.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:43.889 "dma_device_type": 2 00:22:43.889 } 00:22:43.889 ], 00:22:43.889 "driver_specific": {} 00:22:43.889 } 00:22:43.889 ] 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:43.889 
07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:43.889 "name": "Existed_Raid", 00:22:43.889 "uuid": "65ba8195-41c0-4ad0-8580-361beab5e41e", 00:22:43.889 "strip_size_kb": 0, 00:22:43.889 "state": "online", 00:22:43.889 "raid_level": "raid1", 00:22:43.889 "superblock": true, 00:22:43.889 "num_base_bdevs": 2, 00:22:43.889 "num_base_bdevs_discovered": 2, 00:22:43.889 
"num_base_bdevs_operational": 2, 00:22:43.889 "base_bdevs_list": [ 00:22:43.889 { 00:22:43.889 "name": "BaseBdev1", 00:22:43.889 "uuid": "6fd51a9f-e016-4471-948f-072b26bdd463", 00:22:43.889 "is_configured": true, 00:22:43.889 "data_offset": 256, 00:22:43.889 "data_size": 7936 00:22:43.889 }, 00:22:43.889 { 00:22:43.889 "name": "BaseBdev2", 00:22:43.889 "uuid": "b35a140e-1b93-4fea-83e5-010323037149", 00:22:43.889 "is_configured": true, 00:22:43.889 "data_offset": 256, 00:22:43.889 "data_size": 7936 00:22:43.889 } 00:22:43.889 ] 00:22:43.889 }' 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:43.889 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.458 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:22:44.458 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:22:44.458 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:44.458 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.459 07:19:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 [2024-11-20 07:19:41.535357] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:44.459 "name": "Existed_Raid", 00:22:44.459 "aliases": [ 00:22:44.459 "65ba8195-41c0-4ad0-8580-361beab5e41e" 00:22:44.459 ], 00:22:44.459 "product_name": "Raid Volume", 00:22:44.459 "block_size": 4128, 00:22:44.459 "num_blocks": 7936, 00:22:44.459 "uuid": "65ba8195-41c0-4ad0-8580-361beab5e41e", 00:22:44.459 "md_size": 32, 00:22:44.459 "md_interleave": true, 00:22:44.459 "dif_type": 0, 00:22:44.459 "assigned_rate_limits": { 00:22:44.459 "rw_ios_per_sec": 0, 00:22:44.459 "rw_mbytes_per_sec": 0, 00:22:44.459 "r_mbytes_per_sec": 0, 00:22:44.459 "w_mbytes_per_sec": 0 00:22:44.459 }, 00:22:44.459 "claimed": false, 00:22:44.459 "zoned": false, 00:22:44.459 "supported_io_types": { 00:22:44.459 "read": true, 00:22:44.459 "write": true, 00:22:44.459 "unmap": false, 00:22:44.459 "flush": false, 00:22:44.459 "reset": true, 00:22:44.459 "nvme_admin": false, 00:22:44.459 "nvme_io": false, 00:22:44.459 "nvme_io_md": false, 00:22:44.459 "write_zeroes": true, 00:22:44.459 "zcopy": false, 00:22:44.459 "get_zone_info": false, 00:22:44.459 "zone_management": false, 00:22:44.459 "zone_append": false, 00:22:44.459 "compare": false, 00:22:44.459 "compare_and_write": false, 00:22:44.459 "abort": false, 00:22:44.459 "seek_hole": false, 00:22:44.459 "seek_data": false, 00:22:44.459 "copy": false, 00:22:44.459 "nvme_iov_md": false 00:22:44.459 }, 00:22:44.459 "memory_domains": [ 00:22:44.459 { 00:22:44.459 "dma_device_id": "system", 00:22:44.459 "dma_device_type": 1 00:22:44.459 }, 00:22:44.459 { 00:22:44.459 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:22:44.459 "dma_device_type": 2 00:22:44.459 }, 00:22:44.459 { 00:22:44.459 "dma_device_id": "system", 00:22:44.459 "dma_device_type": 1 00:22:44.459 }, 00:22:44.459 { 00:22:44.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:44.459 "dma_device_type": 2 00:22:44.459 } 00:22:44.459 ], 00:22:44.459 "driver_specific": { 00:22:44.459 "raid": { 00:22:44.459 "uuid": "65ba8195-41c0-4ad0-8580-361beab5e41e", 00:22:44.459 "strip_size_kb": 0, 00:22:44.459 "state": "online", 00:22:44.459 "raid_level": "raid1", 00:22:44.459 "superblock": true, 00:22:44.459 "num_base_bdevs": 2, 00:22:44.459 "num_base_bdevs_discovered": 2, 00:22:44.459 "num_base_bdevs_operational": 2, 00:22:44.459 "base_bdevs_list": [ 00:22:44.459 { 00:22:44.459 "name": "BaseBdev1", 00:22:44.459 "uuid": "6fd51a9f-e016-4471-948f-072b26bdd463", 00:22:44.459 "is_configured": true, 00:22:44.459 "data_offset": 256, 00:22:44.459 "data_size": 7936 00:22:44.459 }, 00:22:44.459 { 00:22:44.459 "name": "BaseBdev2", 00:22:44.459 "uuid": "b35a140e-1b93-4fea-83e5-010323037149", 00:22:44.459 "is_configured": true, 00:22:44.459 "data_offset": 256, 00:22:44.459 "data_size": 7936 00:22:44.459 } 00:22:44.459 ] 00:22:44.459 } 00:22:44.459 } 00:22:44.459 }' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:22:44.459 BaseBdev2' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.459 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:44.718 
07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.718 [2024-11-20 07:19:41.807097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:44.718 07:19:41 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:44.718 "name": "Existed_Raid", 00:22:44.718 "uuid": "65ba8195-41c0-4ad0-8580-361beab5e41e", 00:22:44.718 "strip_size_kb": 0, 00:22:44.718 "state": "online", 00:22:44.718 "raid_level": "raid1", 00:22:44.718 "superblock": true, 00:22:44.718 "num_base_bdevs": 2, 00:22:44.718 "num_base_bdevs_discovered": 1, 00:22:44.718 "num_base_bdevs_operational": 1, 00:22:44.718 "base_bdevs_list": [ 00:22:44.718 { 00:22:44.718 "name": null, 00:22:44.718 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:44.718 "is_configured": false, 00:22:44.718 "data_offset": 0, 00:22:44.718 "data_size": 7936 00:22:44.718 }, 00:22:44.718 { 00:22:44.718 "name": "BaseBdev2", 00:22:44.718 "uuid": "b35a140e-1b93-4fea-83e5-010323037149", 00:22:44.718 "is_configured": true, 00:22:44.718 "data_offset": 256, 00:22:44.718 "data_size": 7936 00:22:44.718 } 00:22:44.718 ] 00:22:44.718 }' 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:44.718 07:19:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:22:45.286 07:19:42 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.286 [2024-11-20 07:19:42.465827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:45.286 [2024-11-20 07:19:42.465985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:45.286 [2024-11-20 07:19:42.551759] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:45.286 [2024-11-20 07:19:42.551845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:45.286 [2024-11-20 07:19:42.551898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:45.286 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88836 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88836 ']' 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88836 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88836 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:45.546 killing process with pid 88836 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88836' 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88836 00:22:45.546 [2024-11-20 07:19:42.640250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:45.546 07:19:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88836 00:22:45.546 [2024-11-20 07:19:42.654826] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:46.485 
07:19:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:22:46.485 00:22:46.485 real 0m5.577s 00:22:46.485 user 0m8.454s 00:22:46.485 sys 0m0.814s 00:22:46.485 07:19:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.485 07:19:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.485 ************************************ 00:22:46.485 END TEST raid_state_function_test_sb_md_interleaved 00:22:46.485 ************************************ 00:22:46.485 07:19:43 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:22:46.485 07:19:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:46.485 07:19:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.485 07:19:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:46.485 ************************************ 00:22:46.485 START TEST raid_superblock_test_md_interleaved 00:22:46.485 ************************************ 00:22:46.485 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:22:46.485 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:22:46.485 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:22:46.485 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:22:46.485 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:22:46.485 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89092 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89092 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89092 ']' 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.486 07:19:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:46.746 [2024-11-20 07:19:43.867287] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:22:46.746 [2024-11-20 07:19:43.867483] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89092 ] 00:22:46.746 [2024-11-20 07:19:44.055818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.004 [2024-11-20 07:19:44.192986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.262 [2024-11-20 07:19:44.400879] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.263 [2024-11-20 07:19:44.400963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 malloc1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 [2024-11-20 07:19:44.931239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:47.832 [2024-11-20 07:19:44.931337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.832 [2024-11-20 07:19:44.931370] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:47.832 [2024-11-20 07:19:44.931387] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.832 
[2024-11-20 07:19:44.934123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.832 [2024-11-20 07:19:44.934166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:47.832 pt1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 malloc2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 [2024-11-20 07:19:44.984978] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:47.832 [2024-11-20 07:19:44.985044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:47.832 [2024-11-20 07:19:44.985083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:47.832 [2024-11-20 07:19:44.985098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:47.832 [2024-11-20 07:19:44.987527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:47.832 [2024-11-20 07:19:44.987567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:47.832 pt2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 [2024-11-20 07:19:44.993021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:47.832 [2024-11-20 07:19:44.995607] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:47.832 [2024-11-20 07:19:44.995935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:47.832 [2024-11-20 07:19:44.995964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:47.832 [2024-11-20 07:19:44.996067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:47.832 [2024-11-20 07:19:44.996206] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:47.832 [2024-11-20 07:19:44.996240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:47.832 [2024-11-20 07:19:44.996334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:47.832 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:47.833 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:47.833 
07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:47.833 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:47.833 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.833 07:19:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:47.833 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.833 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:47.833 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.833 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:47.833 "name": "raid_bdev1", 00:22:47.833 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:47.833 "strip_size_kb": 0, 00:22:47.833 "state": "online", 00:22:47.833 "raid_level": "raid1", 00:22:47.833 "superblock": true, 00:22:47.833 "num_base_bdevs": 2, 00:22:47.833 "num_base_bdevs_discovered": 2, 00:22:47.833 "num_base_bdevs_operational": 2, 00:22:47.833 "base_bdevs_list": [ 00:22:47.833 { 00:22:47.833 "name": "pt1", 00:22:47.833 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:47.833 "is_configured": true, 00:22:47.833 "data_offset": 256, 00:22:47.833 "data_size": 7936 00:22:47.833 }, 00:22:47.833 { 00:22:47.833 "name": "pt2", 00:22:47.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:47.833 "is_configured": true, 00:22:47.833 "data_offset": 256, 00:22:47.833 "data_size": 7936 00:22:47.833 } 00:22:47.833 ] 00:22:47.833 }' 00:22:47.833 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:47.833 07:19:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:48.399 [2024-11-20 07:19:45.521515] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.399 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:48.399 "name": "raid_bdev1", 00:22:48.399 "aliases": [ 00:22:48.399 "583c919a-dceb-467b-83b4-bd2384e62ca2" 00:22:48.399 ], 00:22:48.400 "product_name": "Raid Volume", 00:22:48.400 "block_size": 4128, 00:22:48.400 "num_blocks": 7936, 00:22:48.400 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:48.400 "md_size": 32, 
00:22:48.400 "md_interleave": true, 00:22:48.400 "dif_type": 0, 00:22:48.400 "assigned_rate_limits": { 00:22:48.400 "rw_ios_per_sec": 0, 00:22:48.400 "rw_mbytes_per_sec": 0, 00:22:48.400 "r_mbytes_per_sec": 0, 00:22:48.400 "w_mbytes_per_sec": 0 00:22:48.400 }, 00:22:48.400 "claimed": false, 00:22:48.400 "zoned": false, 00:22:48.400 "supported_io_types": { 00:22:48.400 "read": true, 00:22:48.400 "write": true, 00:22:48.400 "unmap": false, 00:22:48.400 "flush": false, 00:22:48.400 "reset": true, 00:22:48.400 "nvme_admin": false, 00:22:48.400 "nvme_io": false, 00:22:48.400 "nvme_io_md": false, 00:22:48.400 "write_zeroes": true, 00:22:48.400 "zcopy": false, 00:22:48.400 "get_zone_info": false, 00:22:48.400 "zone_management": false, 00:22:48.400 "zone_append": false, 00:22:48.400 "compare": false, 00:22:48.400 "compare_and_write": false, 00:22:48.400 "abort": false, 00:22:48.400 "seek_hole": false, 00:22:48.400 "seek_data": false, 00:22:48.400 "copy": false, 00:22:48.400 "nvme_iov_md": false 00:22:48.400 }, 00:22:48.400 "memory_domains": [ 00:22:48.400 { 00:22:48.400 "dma_device_id": "system", 00:22:48.400 "dma_device_type": 1 00:22:48.400 }, 00:22:48.400 { 00:22:48.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.400 "dma_device_type": 2 00:22:48.400 }, 00:22:48.400 { 00:22:48.400 "dma_device_id": "system", 00:22:48.400 "dma_device_type": 1 00:22:48.400 }, 00:22:48.400 { 00:22:48.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:48.400 "dma_device_type": 2 00:22:48.400 } 00:22:48.400 ], 00:22:48.400 "driver_specific": { 00:22:48.400 "raid": { 00:22:48.400 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:48.400 "strip_size_kb": 0, 00:22:48.400 "state": "online", 00:22:48.400 "raid_level": "raid1", 00:22:48.400 "superblock": true, 00:22:48.400 "num_base_bdevs": 2, 00:22:48.400 "num_base_bdevs_discovered": 2, 00:22:48.400 "num_base_bdevs_operational": 2, 00:22:48.400 "base_bdevs_list": [ 00:22:48.400 { 00:22:48.400 "name": "pt1", 00:22:48.400 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:22:48.400 "is_configured": true, 00:22:48.400 "data_offset": 256, 00:22:48.400 "data_size": 7936 00:22:48.400 }, 00:22:48.400 { 00:22:48.400 "name": "pt2", 00:22:48.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.400 "is_configured": true, 00:22:48.400 "data_offset": 256, 00:22:48.400 "data_size": 7936 00:22:48.400 } 00:22:48.400 ] 00:22:48.400 } 00:22:48.400 } 00:22:48.400 }' 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:48.400 pt2' 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.400 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:48.657 07:19:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:22:48.657 [2024-11-20 07:19:45.813810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=583c919a-dceb-467b-83b4-bd2384e62ca2 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 583c919a-dceb-467b-83b4-bd2384e62ca2 ']' 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 [2024-11-20 07:19:45.869228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.657 [2024-11-20 07:19:45.869264] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:48.657 [2024-11-20 07:19:45.869379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:48.657 [2024-11-20 07:19:45.869468] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:48.657 [2024-11-20 07:19:45.869500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.657 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:48.657 07:19:45 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.916 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:22:48.916 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:48.916 07:19:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.916 [2024-11-20 07:19:46.009297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:48.916 [2024-11-20 07:19:46.011808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:48.916 [2024-11-20 07:19:46.011963] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:22:48.916 [2024-11-20 07:19:46.012043] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:48.916 [2024-11-20 07:19:46.012070] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:48.916 [2024-11-20 07:19:46.012085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:22:48.916 request: 00:22:48.916 { 00:22:48.916 "name": "raid_bdev1", 00:22:48.916 "raid_level": "raid1", 00:22:48.916 "base_bdevs": [ 00:22:48.916 "malloc1", 00:22:48.916 "malloc2" 00:22:48.916 ], 00:22:48.916 "superblock": false, 00:22:48.916 "method": "bdev_raid_create", 00:22:48.916 "req_id": 1 00:22:48.916 } 00:22:48.916 Got JSON-RPC error response 00:22:48.916 response: 00:22:48.916 { 00:22:48.916 "code": -17, 00:22:48.916 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:48.916 } 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.916 07:19:46 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.916 [2024-11-20 07:19:46.069286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:48.916 [2024-11-20 07:19:46.069361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:48.916 [2024-11-20 07:19:46.069401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:48.916 [2024-11-20 07:19:46.069419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:48.916 [2024-11-20 07:19:46.072021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:48.916 [2024-11-20 07:19:46.072066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:48.916 [2024-11-20 07:19:46.072151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:48.916 [2024-11-20 07:19:46.072248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:48.916 pt1 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.916 07:19:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:48.916 
"name": "raid_bdev1", 00:22:48.916 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:48.916 "strip_size_kb": 0, 00:22:48.916 "state": "configuring", 00:22:48.916 "raid_level": "raid1", 00:22:48.916 "superblock": true, 00:22:48.916 "num_base_bdevs": 2, 00:22:48.916 "num_base_bdevs_discovered": 1, 00:22:48.916 "num_base_bdevs_operational": 2, 00:22:48.916 "base_bdevs_list": [ 00:22:48.916 { 00:22:48.916 "name": "pt1", 00:22:48.916 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:48.916 "is_configured": true, 00:22:48.916 "data_offset": 256, 00:22:48.916 "data_size": 7936 00:22:48.916 }, 00:22:48.916 { 00:22:48.916 "name": null, 00:22:48.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:48.916 "is_configured": false, 00:22:48.916 "data_offset": 256, 00:22:48.916 "data_size": 7936 00:22:48.916 } 00:22:48.916 ] 00:22:48.916 }' 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:48.916 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.484 [2024-11-20 07:19:46.589407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:49.484 [2024-11-20 07:19:46.589485] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:49.484 [2024-11-20 07:19:46.589517] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:49.484 [2024-11-20 07:19:46.589535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:49.484 [2024-11-20 07:19:46.589746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:49.484 [2024-11-20 07:19:46.589773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:49.484 [2024-11-20 07:19:46.589836] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:49.484 [2024-11-20 07:19:46.589890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:49.484 [2024-11-20 07:19:46.590020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:22:49.484 [2024-11-20 07:19:46.590041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:49.484 [2024-11-20 07:19:46.590129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:22:49.484 [2024-11-20 07:19:46.590229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:22:49.484 [2024-11-20 07:19:46.590245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:22:49.484 [2024-11-20 07:19:46.590330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.484 pt2 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:22:49.484 07:19:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:49.484 "name": 
"raid_bdev1", 00:22:49.484 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:49.484 "strip_size_kb": 0, 00:22:49.484 "state": "online", 00:22:49.484 "raid_level": "raid1", 00:22:49.484 "superblock": true, 00:22:49.484 "num_base_bdevs": 2, 00:22:49.484 "num_base_bdevs_discovered": 2, 00:22:49.484 "num_base_bdevs_operational": 2, 00:22:49.484 "base_bdevs_list": [ 00:22:49.484 { 00:22:49.484 "name": "pt1", 00:22:49.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:49.484 "is_configured": true, 00:22:49.484 "data_offset": 256, 00:22:49.484 "data_size": 7936 00:22:49.484 }, 00:22:49.484 { 00:22:49.484 "name": "pt2", 00:22:49.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:49.484 "is_configured": true, 00:22:49.484 "data_offset": 256, 00:22:49.484 "data_size": 7936 00:22:49.484 } 00:22:49.484 ] 00:22:49.484 }' 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:49.484 07:19:46 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.052 [2024-11-20 07:19:47.121960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.052 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:22:50.052 "name": "raid_bdev1", 00:22:50.052 "aliases": [ 00:22:50.052 "583c919a-dceb-467b-83b4-bd2384e62ca2" 00:22:50.052 ], 00:22:50.052 "product_name": "Raid Volume", 00:22:50.052 "block_size": 4128, 00:22:50.052 "num_blocks": 7936, 00:22:50.052 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:50.052 "md_size": 32, 00:22:50.052 "md_interleave": true, 00:22:50.052 "dif_type": 0, 00:22:50.052 "assigned_rate_limits": { 00:22:50.052 "rw_ios_per_sec": 0, 00:22:50.052 "rw_mbytes_per_sec": 0, 00:22:50.052 "r_mbytes_per_sec": 0, 00:22:50.052 "w_mbytes_per_sec": 0 00:22:50.052 }, 00:22:50.052 "claimed": false, 00:22:50.052 "zoned": false, 00:22:50.052 "supported_io_types": { 00:22:50.052 "read": true, 00:22:50.052 "write": true, 00:22:50.052 "unmap": false, 00:22:50.052 "flush": false, 00:22:50.052 "reset": true, 00:22:50.052 "nvme_admin": false, 00:22:50.052 "nvme_io": false, 00:22:50.053 "nvme_io_md": false, 00:22:50.053 "write_zeroes": true, 00:22:50.053 "zcopy": false, 00:22:50.053 "get_zone_info": false, 00:22:50.053 "zone_management": false, 00:22:50.053 "zone_append": false, 00:22:50.053 "compare": false, 00:22:50.053 "compare_and_write": false, 00:22:50.053 "abort": false, 00:22:50.053 "seek_hole": false, 00:22:50.053 "seek_data": false, 00:22:50.053 "copy": false, 00:22:50.053 "nvme_iov_md": false 00:22:50.053 }, 
00:22:50.053 "memory_domains": [ 00:22:50.053 { 00:22:50.053 "dma_device_id": "system", 00:22:50.053 "dma_device_type": 1 00:22:50.053 }, 00:22:50.053 { 00:22:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.053 "dma_device_type": 2 00:22:50.053 }, 00:22:50.053 { 00:22:50.053 "dma_device_id": "system", 00:22:50.053 "dma_device_type": 1 00:22:50.053 }, 00:22:50.053 { 00:22:50.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.053 "dma_device_type": 2 00:22:50.053 } 00:22:50.053 ], 00:22:50.053 "driver_specific": { 00:22:50.053 "raid": { 00:22:50.053 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:50.053 "strip_size_kb": 0, 00:22:50.053 "state": "online", 00:22:50.053 "raid_level": "raid1", 00:22:50.053 "superblock": true, 00:22:50.053 "num_base_bdevs": 2, 00:22:50.053 "num_base_bdevs_discovered": 2, 00:22:50.053 "num_base_bdevs_operational": 2, 00:22:50.053 "base_bdevs_list": [ 00:22:50.053 { 00:22:50.053 "name": "pt1", 00:22:50.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:50.053 "is_configured": true, 00:22:50.053 "data_offset": 256, 00:22:50.053 "data_size": 7936 00:22:50.053 }, 00:22:50.053 { 00:22:50.053 "name": "pt2", 00:22:50.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.053 "is_configured": true, 00:22:50.053 "data_offset": 256, 00:22:50.053 "data_size": 7936 00:22:50.053 } 00:22:50.053 ] 00:22:50.053 } 00:22:50.053 } 00:22:50.053 }' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:22:50.053 pt2' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.053 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.312 [2024-11-20 07:19:47.390046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 583c919a-dceb-467b-83b4-bd2384e62ca2 '!=' 583c919a-dceb-467b-83b4-bd2384e62ca2 ']' 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.312 [2024-11-20 07:19:47.437790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
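The geometry checks traced above build a space-joined string of `block_size`, `md_size`, `md_interleave`, and `dif_type` from each bdev's `bdev_get_bdevs` output and compare it against the raid bdev's string. A minimal stand-alone sketch of that comparison pattern, with the field values hardcoded here for illustration (in the real script they come from `rpc_cmd bdev_get_bdevs` piped through `jq`):

```shell
# Hypothetical stand-ins for the jq-extracted geometry strings; the
# real test derives both from live rpc_cmd bdev_get_bdevs output.
cmp_raid_bdev='4128 32 true 0'
cmp_base_bdev='4128 32 true 0'

# Same check the test performs: every base bdev must report the
# identical block_size, md_size, md_interleave and dif_type.
if [ "$cmp_base_bdev" = "$cmp_raid_bdev" ]; then
    echo match
else
    echo mismatch
fi
```

The test script's own form of this check uses a bash `[[ … == \4\1\2\8… ]]` pattern match, which xtrace renders with the backslash-escaped expected value seen in the log.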
00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:22:50.312 "name": "raid_bdev1", 00:22:50.312 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:50.312 "strip_size_kb": 0, 00:22:50.312 "state": "online", 00:22:50.312 "raid_level": "raid1", 00:22:50.312 "superblock": true, 00:22:50.312 "num_base_bdevs": 2, 00:22:50.312 "num_base_bdevs_discovered": 1, 00:22:50.312 "num_base_bdevs_operational": 1, 00:22:50.312 "base_bdevs_list": [ 00:22:50.312 { 00:22:50.312 "name": null, 00:22:50.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.312 "is_configured": false, 00:22:50.312 "data_offset": 0, 00:22:50.312 "data_size": 7936 00:22:50.312 }, 00:22:50.312 { 00:22:50.312 "name": "pt2", 00:22:50.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.312 "is_configured": true, 00:22:50.312 "data_offset": 256, 00:22:50.312 "data_size": 7936 00:22:50.312 } 00:22:50.312 ] 00:22:50.312 }' 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.312 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 [2024-11-20 07:19:47.985833] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:50.880 [2024-11-20 07:19:47.985910] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.880 [2024-11-20 07:19:47.986006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.880 [2024-11-20 07:19:47.986076] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:50.880 [2024-11-20 
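The `verify_raid_bdev_state` helper traced above pulls individual fields (state, raid level, discovered/operational base bdev counts) out of the `bdev_raid_get_bdevs` JSON with `jq`. As a jq-free sketch under the assumption that the blob stays on one line, the numeric fields can be extracted with `sed` (the `get_field` helper name is invented for this example; the actual helper uses `jq`):

```shell
# Trimmed sample of the raid bdev JSON the helper inspects; field
# names match the rpc output shown in the log.
raid_bdev_info='{ "state": "online", "raid_level": "raid1", "num_base_bdevs": 2, "num_base_bdevs_discovered": 1 }'

# Hypothetical helper: extract one numeric field from the blob.
get_field() {
    blob=$1
    key=$2
    printf '%s\n' "$blob" | sed -n "s/.*\"$key\": \([0-9][0-9]*\).*/\1/p"
}

get_field "$raid_bdev_info" num_base_bdevs_discovered
```

With the sample blob above this prints `1`, matching the expected state after `pt1` is deleted: one base bdev discovered, one operational, raid still online.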
07:19:47.986095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 07:19:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 [2024-11-20 07:19:48.061889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:50.880 [2024-11-20 07:19:48.061989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:50.880 [2024-11-20 07:19:48.062015] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:22:50.880 [2024-11-20 07:19:48.062032] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:50.880 [2024-11-20 07:19:48.064675] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:50.880 [2024-11-20 07:19:48.064734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:50.880 [2024-11-20 07:19:48.064805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:50.880 [2024-11-20 07:19:48.064915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:50.880 [2024-11-20 07:19:48.065030] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:22:50.880 [2024-11-20 07:19:48.065058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:22:50.880 [2024-11-20 07:19:48.065170] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:50.880 [2024-11-20 07:19:48.065262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:22:50.880 [2024-11-20 07:19:48.065277] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:22:50.880 [2024-11-20 07:19:48.065365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:50.880 pt2 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.880 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:50.880 "name": "raid_bdev1", 00:22:50.880 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:50.881 "strip_size_kb": 0, 00:22:50.881 "state": "online", 00:22:50.881 "raid_level": "raid1", 00:22:50.881 "superblock": true, 00:22:50.881 "num_base_bdevs": 2, 00:22:50.881 "num_base_bdevs_discovered": 1, 00:22:50.881 "num_base_bdevs_operational": 1, 00:22:50.881 "base_bdevs_list": [ 00:22:50.881 { 00:22:50.881 "name": null, 00:22:50.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.881 "is_configured": false, 00:22:50.881 "data_offset": 256, 00:22:50.881 "data_size": 7936 00:22:50.881 }, 00:22:50.881 { 00:22:50.881 "name": "pt2", 00:22:50.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:50.881 "is_configured": true, 00:22:50.881 "data_offset": 256, 00:22:50.881 "data_size": 7936 00:22:50.881 } 00:22:50.881 ] 00:22:50.881 }' 00:22:50.881 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:50.881 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.448 [2024-11-20 07:19:48.610031] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.448 [2024-11-20 07:19:48.610070] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:51.448 [2024-11-20 07:19:48.610159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:51.448 [2024-11-20 07:19:48.610228] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:51.448 [2024-11-20 07:19:48.610243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.448 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.448 [2024-11-20 07:19:48.678135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:51.448 [2024-11-20 07:19:48.678219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:51.448 [2024-11-20 07:19:48.678253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:51.448 [2024-11-20 07:19:48.678268] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:51.448 [2024-11-20 07:19:48.680829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:51.448 [2024-11-20 07:19:48.680917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:51.448 [2024-11-20 07:19:48.681000] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:51.448 [2024-11-20 07:19:48.681068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:51.448 [2024-11-20 07:19:48.681199] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:51.449 [2024-11-20 07:19:48.681217] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:51.449 [2024-11-20 07:19:48.681243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:22:51.449 [2024-11-20 07:19:48.681314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:51.449 [2024-11-20 07:19:48.681416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:22:51.449 [2024-11-20 07:19:48.681438] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:51.449 [2024-11-20 07:19:48.681532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:51.449 [2024-11-20 07:19:48.681621] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:22:51.449 [2024-11-20 07:19:48.681641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:22:51.449 [2024-11-20 07:19:48.681743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:51.449 pt1 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:51.449 "name": "raid_bdev1", 00:22:51.449 "uuid": "583c919a-dceb-467b-83b4-bd2384e62ca2", 00:22:51.449 "strip_size_kb": 0, 00:22:51.449 "state": "online", 00:22:51.449 "raid_level": "raid1", 00:22:51.449 "superblock": true, 00:22:51.449 "num_base_bdevs": 2, 00:22:51.449 "num_base_bdevs_discovered": 1, 00:22:51.449 "num_base_bdevs_operational": 1, 00:22:51.449 "base_bdevs_list": [ 00:22:51.449 { 00:22:51.449 "name": null, 00:22:51.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.449 "is_configured": false, 00:22:51.449 "data_offset": 256, 00:22:51.449 "data_size": 7936 00:22:51.449 }, 00:22:51.449 { 00:22:51.449 "name": "pt2", 00:22:51.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:51.449 "is_configured": true, 00:22:51.449 "data_offset": 256, 00:22:51.449 "data_size": 7936 00:22:51.449 } 00:22:51.449 ] 00:22:51.449 }' 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:51.449 07:19:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:52.017 [2024-11-20 07:19:49.278582] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 583c919a-dceb-467b-83b4-bd2384e62ca2 '!=' 583c919a-dceb-467b-83b4-bd2384e62ca2 ']' 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89092 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89092 ']' 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89092 00:22:52.017 07:19:49 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.017 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89092 00:22:52.275 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.276 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.276 killing process with pid 89092 00:22:52.276 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89092' 00:22:52.276 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89092 00:22:52.276 [2024-11-20 07:19:49.353444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:52.276 07:19:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89092 00:22:52.276 [2024-11-20 07:19:49.353564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.276 [2024-11-20 07:19:49.353631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:52.276 [2024-11-20 07:19:49.353653] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:22:52.276 [2024-11-20 07:19:49.534890] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:53.680 07:19:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:22:53.680 00:22:53.680 real 0m6.810s 00:22:53.680 user 0m10.861s 00:22:53.680 sys 0m0.985s 00:22:53.680 07:19:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 
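The teardown traced above (`killprocess 89092`) first confirms via `ps` that the pid still names the expected `reactor_0` process before signalling it. A simplified sketch of that guard-then-kill pattern, using `kill -0` as the liveness probe on a throwaway background process (the real helper additionally matches the command name and handles sudo-owned processes):

```shell
# Start a disposable background process to stand in for the SPDK app.
sleep 60 &
pid=$!

# Only signal the pid if it is still alive; avoids killing a recycled
# pid. The real killprocess also verifies the command name via ps.
if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"
    kill "$pid"
fi
wait "$pid" 2>/dev/null
```

After `wait` returns the pid is reaped, mirroring the `wait 89092` that follows the kill in the log.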
00:22:53.680 ************************************ 00:22:53.680 END TEST raid_superblock_test_md_interleaved 00:22:53.680 ************************************ 00:22:53.680 07:19:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.680 07:19:50 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:22:53.680 07:19:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:22:53.680 07:19:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.680 07:19:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:53.680 ************************************ 00:22:53.680 START TEST raid_rebuild_test_sb_md_interleaved 00:22:53.680 ************************************ 00:22:53.680 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89427 00:22:53.681 07:19:50 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89427 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89427 ']' 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.681 07:19:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:53.681 [2024-11-20 07:19:50.721243] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:22:53.681 [2024-11-20 07:19:50.721409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89427 ] 00:22:53.681 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:53.681 Zero copy mechanism will not be used. 
00:22:53.681 [2024-11-20 07:19:50.904228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.943 [2024-11-20 07:19:51.066421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.202 [2024-11-20 07:19:51.320716] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:54.203 [2024-11-20 07:19:51.320804] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.461 BaseBdev1_malloc 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.461 [2024-11-20 07:19:51.769938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:54.461 [2024-11-20 07:19:51.770013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.461 
[2024-11-20 07:19:51.770056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:22:54.461 [2024-11-20 07:19:51.770079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.461 [2024-11-20 07:19:51.772832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.461 [2024-11-20 07:19:51.772894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:54.461 BaseBdev1 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.461 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 BaseBdev2_malloc 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 [2024-11-20 07:19:51.819222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:54.720 [2024-11-20 07:19:51.819320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.720 [2024-11-20 07:19:51.819351] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:22:54.720 [2024-11-20 07:19:51.819371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.720 [2024-11-20 07:19:51.821780] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.720 [2024-11-20 07:19:51.821831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:54.720 BaseBdev2 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 spare_malloc 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 spare_delay 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 07:19:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 [2024-11-20 07:19:51.907428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:54.720 [2024-11-20 07:19:51.907513] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:54.720 [2024-11-20 07:19:51.907551] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:54.720 [2024-11-20 07:19:51.907573] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:54.720 [2024-11-20 07:19:51.910681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:54.720 [2024-11-20 07:19:51.910741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:54.720 spare 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 [2024-11-20 07:19:51.915721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:54.720 [2024-11-20 07:19:51.918752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:54.720 [2024-11-20 07:19:51.919071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:22:54.720 [2024-11-20 07:19:51.919102] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:22:54.720 [2024-11-20 07:19:51.919224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:22:54.720 [2024-11-20 07:19:51.919349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:22:54.720 [2024-11-20 07:19:51.919380] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:22:54.720 [2024-11-20 07:19:51.919497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.720 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:54.720 "name": "raid_bdev1", 00:22:54.720 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:54.720 "strip_size_kb": 0, 00:22:54.720 "state": "online", 00:22:54.720 "raid_level": "raid1", 00:22:54.720 "superblock": true, 00:22:54.720 "num_base_bdevs": 2, 00:22:54.720 "num_base_bdevs_discovered": 2, 00:22:54.720 "num_base_bdevs_operational": 2, 00:22:54.720 "base_bdevs_list": [ 00:22:54.720 { 00:22:54.721 "name": "BaseBdev1", 00:22:54.721 "uuid": "8194c230-3912-548c-add2-7299736b2796", 00:22:54.721 "is_configured": true, 00:22:54.721 "data_offset": 256, 00:22:54.721 "data_size": 7936 00:22:54.721 }, 00:22:54.721 { 00:22:54.721 "name": "BaseBdev2", 00:22:54.721 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:54.721 "is_configured": true, 00:22:54.721 "data_offset": 256, 00:22:54.721 "data_size": 7936 00:22:54.721 } 00:22:54.721 ] 00:22:54.721 }' 00:22:54.721 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:54.721 07:19:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.288 
07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.288 [2024-11-20 07:19:52.436223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.288 [2024-11-20 07:19:52.539840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:55.288 "name": "raid_bdev1", 00:22:55.288 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:55.288 "strip_size_kb": 0, 00:22:55.288 "state": "online", 00:22:55.288 "raid_level": "raid1", 00:22:55.288 "superblock": true, 00:22:55.288 "num_base_bdevs": 2, 00:22:55.288 "num_base_bdevs_discovered": 1, 00:22:55.288 "num_base_bdevs_operational": 1, 00:22:55.288 "base_bdevs_list": [ 00:22:55.288 { 00:22:55.288 "name": null, 00:22:55.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.288 "is_configured": false, 00:22:55.288 "data_offset": 0, 00:22:55.288 "data_size": 7936 00:22:55.288 }, 00:22:55.288 { 00:22:55.288 "name": "BaseBdev2", 00:22:55.288 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:55.288 "is_configured": true, 00:22:55.288 "data_offset": 256, 00:22:55.288 "data_size": 7936 00:22:55.288 } 00:22:55.288 ] 00:22:55.288 }' 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:55.288 07:19:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.855 07:19:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:55.855 07:19:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.855 07:19:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:55.855 [2024-11-20 07:19:53.104052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:55.855 [2024-11-20 07:19:53.120691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:55.855 07:19:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.855 07:19:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:22:55.855 
[2024-11-20 07:19:53.123196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.232 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.232 "name": "raid_bdev1", 00:22:57.232 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:57.232 "strip_size_kb": 0, 00:22:57.232 "state": "online", 00:22:57.232 "raid_level": "raid1", 00:22:57.232 "superblock": true, 00:22:57.232 "num_base_bdevs": 2, 00:22:57.232 "num_base_bdevs_discovered": 2, 00:22:57.232 "num_base_bdevs_operational": 2, 00:22:57.232 "process": { 00:22:57.232 "type": "rebuild", 00:22:57.232 "target": "spare", 00:22:57.232 "progress": { 00:22:57.232 
"blocks": 2560, 00:22:57.232 "percent": 32 00:22:57.232 } 00:22:57.232 }, 00:22:57.232 "base_bdevs_list": [ 00:22:57.232 { 00:22:57.232 "name": "spare", 00:22:57.232 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:22:57.232 "is_configured": true, 00:22:57.232 "data_offset": 256, 00:22:57.232 "data_size": 7936 00:22:57.232 }, 00:22:57.232 { 00:22:57.232 "name": "BaseBdev2", 00:22:57.232 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:57.232 "is_configured": true, 00:22:57.232 "data_offset": 256, 00:22:57.232 "data_size": 7936 00:22:57.232 } 00:22:57.232 ] 00:22:57.232 }' 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.233 [2024-11-20 07:19:54.264245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.233 [2024-11-20 07:19:54.332233] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:57.233 [2024-11-20 07:19:54.332589] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:57.233 [2024-11-20 07:19:54.332620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:57.233 [2024-11-20 07:19:54.332636] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:22:57.233 "name": "raid_bdev1", 00:22:57.233 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:57.233 "strip_size_kb": 0, 00:22:57.233 "state": "online", 00:22:57.233 "raid_level": "raid1", 00:22:57.233 "superblock": true, 00:22:57.233 "num_base_bdevs": 2, 00:22:57.233 "num_base_bdevs_discovered": 1, 00:22:57.233 "num_base_bdevs_operational": 1, 00:22:57.233 "base_bdevs_list": [ 00:22:57.233 { 00:22:57.233 "name": null, 00:22:57.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.233 "is_configured": false, 00:22:57.233 "data_offset": 0, 00:22:57.233 "data_size": 7936 00:22:57.233 }, 00:22:57.233 { 00:22:57.233 "name": "BaseBdev2", 00:22:57.233 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:57.233 "is_configured": true, 00:22:57.233 "data_offset": 256, 00:22:57.233 "data_size": 7936 00:22:57.233 } 00:22:57.233 ] 00:22:57.233 }' 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:22:57.233 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:57.801 07:19:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:57.801 "name": "raid_bdev1", 00:22:57.801 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:57.801 "strip_size_kb": 0, 00:22:57.801 "state": "online", 00:22:57.801 "raid_level": "raid1", 00:22:57.801 "superblock": true, 00:22:57.801 "num_base_bdevs": 2, 00:22:57.801 "num_base_bdevs_discovered": 1, 00:22:57.801 "num_base_bdevs_operational": 1, 00:22:57.801 "base_bdevs_list": [ 00:22:57.801 { 00:22:57.801 "name": null, 00:22:57.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.801 "is_configured": false, 00:22:57.801 "data_offset": 0, 00:22:57.801 "data_size": 7936 00:22:57.801 }, 00:22:57.801 { 00:22:57.801 "name": "BaseBdev2", 00:22:57.801 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:57.801 "is_configured": true, 00:22:57.801 "data_offset": 256, 00:22:57.801 "data_size": 7936 00:22:57.801 } 00:22:57.801 ] 00:22:57.801 }' 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:22:57.801 07:19:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:57.801 07:19:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:22:57.801 07:19:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:22:57.801 07:19:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.801 07:19:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:57.801 [2024-11-20 07:19:55.045394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:57.801 [2024-11-20 07:19:55.061645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:57.801 07:19:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.801 07:19:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:22:57.801 [2024-11-20 07:19:55.064110] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.178 
07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.178 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.178 "name": "raid_bdev1", 00:22:59.178 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:59.178 "strip_size_kb": 0, 00:22:59.178 "state": "online", 00:22:59.178 "raid_level": "raid1", 00:22:59.178 "superblock": true, 00:22:59.178 "num_base_bdevs": 2, 00:22:59.178 "num_base_bdevs_discovered": 2, 00:22:59.178 "num_base_bdevs_operational": 2, 00:22:59.178 "process": { 00:22:59.178 "type": "rebuild", 00:22:59.178 "target": "spare", 00:22:59.178 "progress": { 00:22:59.178 "blocks": 2560, 00:22:59.178 "percent": 32 00:22:59.178 } 00:22:59.178 }, 00:22:59.178 "base_bdevs_list": [ 00:22:59.178 { 00:22:59.178 "name": "spare", 00:22:59.178 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:22:59.178 "is_configured": true, 00:22:59.178 "data_offset": 256, 00:22:59.178 "data_size": 7936 00:22:59.178 }, 00:22:59.178 { 00:22:59.178 "name": "BaseBdev2", 00:22:59.178 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:59.179 "is_configured": true, 00:22:59.179 "data_offset": 256, 00:22:59.179 "data_size": 7936 00:22:59.179 } 00:22:59.179 ] 00:22:59.179 }' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.179 07:19:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:22:59.179 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=801 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.179 07:19:56 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:22:59.179 "name": "raid_bdev1", 00:22:59.179 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:22:59.179 "strip_size_kb": 0, 00:22:59.179 "state": "online", 00:22:59.179 "raid_level": "raid1", 00:22:59.179 "superblock": true, 00:22:59.179 "num_base_bdevs": 2, 00:22:59.179 "num_base_bdevs_discovered": 2, 00:22:59.179 "num_base_bdevs_operational": 2, 00:22:59.179 "process": { 00:22:59.179 "type": "rebuild", 00:22:59.179 "target": "spare", 00:22:59.179 "progress": { 00:22:59.179 "blocks": 2816, 00:22:59.179 "percent": 35 00:22:59.179 } 00:22:59.179 }, 00:22:59.179 "base_bdevs_list": [ 00:22:59.179 { 00:22:59.179 "name": "spare", 00:22:59.179 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:22:59.179 "is_configured": true, 00:22:59.179 "data_offset": 256, 00:22:59.179 "data_size": 7936 00:22:59.179 }, 00:22:59.179 { 00:22:59.179 "name": "BaseBdev2", 00:22:59.179 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:22:59.179 "is_configured": true, 00:22:59.179 "data_offset": 256, 00:22:59.179 "data_size": 7936 00:22:59.179 } 00:22:59.179 ] 00:22:59.179 }' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:22:59.179 07:19:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:00.116 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.374 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:00.374 "name": "raid_bdev1", 00:23:00.374 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:00.374 "strip_size_kb": 0, 00:23:00.374 "state": "online", 00:23:00.374 "raid_level": "raid1", 00:23:00.374 "superblock": true, 00:23:00.374 "num_base_bdevs": 2, 00:23:00.374 "num_base_bdevs_discovered": 2, 00:23:00.374 
"num_base_bdevs_operational": 2, 00:23:00.374 "process": { 00:23:00.374 "type": "rebuild", 00:23:00.374 "target": "spare", 00:23:00.374 "progress": { 00:23:00.374 "blocks": 5888, 00:23:00.374 "percent": 74 00:23:00.374 } 00:23:00.374 }, 00:23:00.374 "base_bdevs_list": [ 00:23:00.374 { 00:23:00.374 "name": "spare", 00:23:00.374 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:00.374 "is_configured": true, 00:23:00.374 "data_offset": 256, 00:23:00.374 "data_size": 7936 00:23:00.374 }, 00:23:00.374 { 00:23:00.374 "name": "BaseBdev2", 00:23:00.374 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:00.374 "is_configured": true, 00:23:00.374 "data_offset": 256, 00:23:00.374 "data_size": 7936 00:23:00.374 } 00:23:00.374 ] 00:23:00.374 }' 00:23:00.374 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:00.374 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:00.374 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:00.374 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:00.374 07:19:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:23:00.941 [2024-11-20 07:19:58.186427] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:00.941 [2024-11-20 07:19:58.186534] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:00.941 [2024-11-20 07:19:58.186695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.577 "name": "raid_bdev1", 00:23:01.577 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:01.577 "strip_size_kb": 0, 00:23:01.577 "state": "online", 00:23:01.577 "raid_level": "raid1", 00:23:01.577 "superblock": true, 00:23:01.577 "num_base_bdevs": 2, 00:23:01.577 "num_base_bdevs_discovered": 2, 00:23:01.577 "num_base_bdevs_operational": 2, 00:23:01.577 "base_bdevs_list": [ 00:23:01.577 { 00:23:01.577 "name": "spare", 00:23:01.577 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:01.577 "is_configured": true, 00:23:01.577 "data_offset": 256, 00:23:01.577 "data_size": 7936 00:23:01.577 }, 00:23:01.577 { 00:23:01.577 "name": "BaseBdev2", 00:23:01.577 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:01.577 
"is_configured": true, 00:23:01.577 "data_offset": 256, 00:23:01.577 "data_size": 7936 00:23:01.577 } 00:23:01.577 ] 00:23:01.577 }' 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:01.577 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:01.578 "name": "raid_bdev1", 00:23:01.578 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:01.578 "strip_size_kb": 0, 00:23:01.578 "state": "online", 00:23:01.578 "raid_level": "raid1", 00:23:01.578 "superblock": true, 00:23:01.578 "num_base_bdevs": 2, 00:23:01.578 "num_base_bdevs_discovered": 2, 00:23:01.578 "num_base_bdevs_operational": 2, 00:23:01.578 "base_bdevs_list": [ 00:23:01.578 { 00:23:01.578 "name": "spare", 00:23:01.578 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:01.578 "is_configured": true, 00:23:01.578 "data_offset": 256, 00:23:01.578 "data_size": 7936 00:23:01.578 }, 00:23:01.578 { 00:23:01.578 "name": "BaseBdev2", 00:23:01.578 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:01.578 "is_configured": true, 00:23:01.578 "data_offset": 256, 00:23:01.578 "data_size": 7936 00:23:01.578 } 00:23:01.578 ] 00:23:01.578 }' 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:01.578 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:01.836 "name": "raid_bdev1", 00:23:01.836 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:01.836 "strip_size_kb": 0, 00:23:01.836 "state": "online", 00:23:01.836 "raid_level": "raid1", 00:23:01.836 "superblock": true, 00:23:01.836 "num_base_bdevs": 2, 00:23:01.836 "num_base_bdevs_discovered": 2, 00:23:01.836 "num_base_bdevs_operational": 2, 00:23:01.836 "base_bdevs_list": [ 00:23:01.836 { 00:23:01.836 "name": "spare", 00:23:01.836 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:01.836 
"is_configured": true, 00:23:01.836 "data_offset": 256, 00:23:01.836 "data_size": 7936 00:23:01.836 }, 00:23:01.836 { 00:23:01.836 "name": "BaseBdev2", 00:23:01.836 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:01.836 "is_configured": true, 00:23:01.836 "data_offset": 256, 00:23:01.836 "data_size": 7936 00:23:01.836 } 00:23:01.836 ] 00:23:01.836 }' 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:01.836 07:19:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 [2024-11-20 07:19:59.442616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:02.400 [2024-11-20 07:19:59.442802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:02.400 [2024-11-20 07:19:59.442951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:02.400 [2024-11-20 07:19:59.443049] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:02.400 [2024-11-20 07:19:59.443068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 [2024-11-20 07:19:59.514622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:02.400 [2024-11-20 07:19:59.514699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:02.400 [2024-11-20 07:19:59.514735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:02.400 [2024-11-20 07:19:59.514750] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:02.400 [2024-11-20 07:19:59.517352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:02.400 [2024-11-20 07:19:59.517396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:02.400 [2024-11-20 07:19:59.517482] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:02.400 [2024-11-20 07:19:59.517559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:02.400 [2024-11-20 07:19:59.517709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.400 spare 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 [2024-11-20 07:19:59.617840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:23:02.400 [2024-11-20 07:19:59.617912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:23:02.400 [2024-11-20 07:19:59.618085] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:02.400 [2024-11-20 07:19:59.618235] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:23:02.400 [2024-11-20 07:19:59.618254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:23:02.400 [2024-11-20 07:19:59.618388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.400 07:19:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.400 07:19:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:02.400 "name": "raid_bdev1", 00:23:02.400 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:02.400 "strip_size_kb": 0, 00:23:02.400 "state": "online", 00:23:02.400 "raid_level": "raid1", 00:23:02.400 "superblock": true, 00:23:02.400 "num_base_bdevs": 2, 00:23:02.400 "num_base_bdevs_discovered": 2, 00:23:02.400 "num_base_bdevs_operational": 2, 00:23:02.400 "base_bdevs_list": [ 00:23:02.400 { 00:23:02.400 "name": "spare", 00:23:02.400 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:02.400 "is_configured": true, 00:23:02.400 "data_offset": 256, 00:23:02.400 "data_size": 7936 00:23:02.400 }, 00:23:02.400 { 00:23:02.400 "name": "BaseBdev2", 00:23:02.400 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:02.400 "is_configured": true, 00:23:02.400 "data_offset": 256, 00:23:02.400 "data_size": 7936 00:23:02.400 } 00:23:02.400 ] 00:23:02.400 }' 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:02.400 07:19:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:02.967 07:20:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:02.967 "name": "raid_bdev1", 00:23:02.967 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:02.967 "strip_size_kb": 0, 00:23:02.967 "state": "online", 00:23:02.967 "raid_level": "raid1", 00:23:02.967 "superblock": true, 00:23:02.967 "num_base_bdevs": 2, 00:23:02.967 "num_base_bdevs_discovered": 2, 00:23:02.967 "num_base_bdevs_operational": 2, 00:23:02.967 "base_bdevs_list": [ 00:23:02.967 { 00:23:02.967 "name": "spare", 00:23:02.967 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:02.967 "is_configured": true, 00:23:02.967 "data_offset": 256, 00:23:02.967 "data_size": 7936 00:23:02.967 }, 00:23:02.967 { 00:23:02.967 "name": "BaseBdev2", 00:23:02.967 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:02.967 "is_configured": true, 00:23:02.967 "data_offset": 256, 00:23:02.967 "data_size": 7936 00:23:02.967 } 00:23:02.967 ] 00:23:02.967 }' 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:02.967 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:03.225 07:20:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.225 [2024-11-20 07:20:00.359025] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:03.225 07:20:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.225 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:03.225 "name": "raid_bdev1", 00:23:03.226 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:03.226 "strip_size_kb": 0, 00:23:03.226 "state": "online", 00:23:03.226 "raid_level": "raid1", 00:23:03.226 "superblock": true, 00:23:03.226 "num_base_bdevs": 2, 00:23:03.226 "num_base_bdevs_discovered": 1, 00:23:03.226 "num_base_bdevs_operational": 1, 00:23:03.226 "base_bdevs_list": [ 00:23:03.226 { 00:23:03.226 "name": null, 00:23:03.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.226 "is_configured": false, 00:23:03.226 "data_offset": 0, 00:23:03.226 "data_size": 7936 00:23:03.226 }, 00:23:03.226 { 00:23:03.226 "name": "BaseBdev2", 00:23:03.226 
"uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:03.226 "is_configured": true, 00:23:03.226 "data_offset": 256, 00:23:03.226 "data_size": 7936 00:23:03.226 } 00:23:03.226 ] 00:23:03.226 }' 00:23:03.226 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:03.226 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:23:03.794 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.794 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:03.794 [2024-11-20 07:20:00.875151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:03.794 [2024-11-20 07:20:00.875403] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:03.794 [2024-11-20 07:20:00.875428] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:03.794 [2024-11-20 07:20:00.875495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:03.794 [2024-11-20 07:20:00.891342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:23:03.794 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.794 07:20:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:23:03.794 [2024-11-20 07:20:00.894109] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.727 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:23:04.727 "name": "raid_bdev1", 00:23:04.727 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:04.727 "strip_size_kb": 0, 00:23:04.727 "state": "online", 00:23:04.727 "raid_level": "raid1", 00:23:04.727 "superblock": true, 00:23:04.727 "num_base_bdevs": 2, 00:23:04.727 "num_base_bdevs_discovered": 2, 00:23:04.727 "num_base_bdevs_operational": 2, 00:23:04.727 "process": { 00:23:04.727 "type": "rebuild", 00:23:04.727 "target": "spare", 00:23:04.727 "progress": { 00:23:04.727 "blocks": 2560, 00:23:04.727 "percent": 32 00:23:04.727 } 00:23:04.727 }, 00:23:04.727 "base_bdevs_list": [ 00:23:04.727 { 00:23:04.727 "name": "spare", 00:23:04.727 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:04.727 "is_configured": true, 00:23:04.727 "data_offset": 256, 00:23:04.727 "data_size": 7936 00:23:04.727 }, 00:23:04.727 { 00:23:04.728 "name": "BaseBdev2", 00:23:04.728 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:04.728 "is_configured": true, 00:23:04.728 "data_offset": 256, 00:23:04.728 "data_size": 7936 00:23:04.728 } 00:23:04.728 ] 00:23:04.728 }' 00:23:04.728 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:04.728 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:04.728 07:20:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:04.728 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:04.728 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:23:04.728 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.728 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.986 [2024-11-20 07:20:02.047225] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.986 [2024-11-20 07:20:02.103228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:04.986 [2024-11-20 07:20:02.103688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:04.986 [2024-11-20 07:20:02.103717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:04.986 [2024-11-20 07:20:02.103737] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:04.986 07:20:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:04.986 "name": "raid_bdev1", 00:23:04.986 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:04.986 "strip_size_kb": 0, 00:23:04.986 "state": "online", 00:23:04.986 "raid_level": "raid1", 00:23:04.986 "superblock": true, 00:23:04.986 "num_base_bdevs": 2, 00:23:04.986 "num_base_bdevs_discovered": 1, 00:23:04.986 "num_base_bdevs_operational": 1, 00:23:04.986 "base_bdevs_list": [ 00:23:04.986 { 00:23:04.986 "name": null, 00:23:04.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:04.986 "is_configured": false, 00:23:04.986 "data_offset": 0, 00:23:04.986 "data_size": 7936 00:23:04.986 }, 00:23:04.986 { 00:23:04.986 "name": "BaseBdev2", 00:23:04.986 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:04.986 "is_configured": true, 00:23:04.986 "data_offset": 256, 00:23:04.986 "data_size": 7936 00:23:04.986 } 00:23:04.986 ] 00:23:04.986 }' 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:04.986 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:05.591 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:23:05.591 07:20:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.591 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:05.591 [2024-11-20 07:20:02.668303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:05.591 [2024-11-20 07:20:02.668384] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.591 [2024-11-20 07:20:02.668420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:05.591 [2024-11-20 07:20:02.668438] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.591 [2024-11-20 07:20:02.668697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.591 [2024-11-20 07:20:02.668725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:05.591 [2024-11-20 07:20:02.668796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:23:05.591 [2024-11-20 07:20:02.668819] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:23:05.591 [2024-11-20 07:20:02.668832] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:23:05.591 [2024-11-20 07:20:02.668869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:05.591 [2024-11-20 07:20:02.684689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:05.591 spare 00:23:05.591 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.591 07:20:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:23:05.591 [2024-11-20 07:20:02.687135] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.526 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:23:06.526 "name": "raid_bdev1", 00:23:06.526 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:06.526 "strip_size_kb": 0, 00:23:06.526 "state": "online", 00:23:06.526 "raid_level": "raid1", 00:23:06.526 "superblock": true, 00:23:06.526 "num_base_bdevs": 2, 00:23:06.526 "num_base_bdevs_discovered": 2, 00:23:06.526 "num_base_bdevs_operational": 2, 00:23:06.526 "process": { 00:23:06.526 "type": "rebuild", 00:23:06.526 "target": "spare", 00:23:06.526 "progress": { 00:23:06.526 "blocks": 2560, 00:23:06.526 "percent": 32 00:23:06.526 } 00:23:06.526 }, 00:23:06.526 "base_bdevs_list": [ 00:23:06.526 { 00:23:06.526 "name": "spare", 00:23:06.526 "uuid": "5dcf643c-fd61-524e-9424-d5ebe6e38b9c", 00:23:06.527 "is_configured": true, 00:23:06.527 "data_offset": 256, 00:23:06.527 "data_size": 7936 00:23:06.527 }, 00:23:06.527 { 00:23:06.527 "name": "BaseBdev2", 00:23:06.527 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:06.527 "is_configured": true, 00:23:06.527 "data_offset": 256, 00:23:06.527 "data_size": 7936 00:23:06.527 } 00:23:06.527 ] 00:23:06.527 }' 00:23:06.527 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:06.527 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:06.527 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.785 [2024-11-20 
07:20:03.872848] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:06.785 [2024-11-20 07:20:03.896094] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:06.785 [2024-11-20 07:20:03.896208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:06.785 [2024-11-20 07:20:03.896240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:06.785 [2024-11-20 07:20:03.896252] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:06.785 07:20:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:06.785 "name": "raid_bdev1", 00:23:06.785 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:06.785 "strip_size_kb": 0, 00:23:06.785 "state": "online", 00:23:06.785 "raid_level": "raid1", 00:23:06.785 "superblock": true, 00:23:06.785 "num_base_bdevs": 2, 00:23:06.785 "num_base_bdevs_discovered": 1, 00:23:06.785 "num_base_bdevs_operational": 1, 00:23:06.785 "base_bdevs_list": [ 00:23:06.785 { 00:23:06.785 "name": null, 00:23:06.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.785 "is_configured": false, 00:23:06.785 "data_offset": 0, 00:23:06.785 "data_size": 7936 00:23:06.785 }, 00:23:06.785 { 00:23:06.785 "name": "BaseBdev2", 00:23:06.785 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:06.785 "is_configured": true, 00:23:06.785 "data_offset": 256, 00:23:06.785 "data_size": 7936 00:23:06.785 } 00:23:06.785 ] 00:23:06.785 }' 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:06.785 07:20:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:07.353 07:20:04 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:07.353 "name": "raid_bdev1", 00:23:07.353 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:07.353 "strip_size_kb": 0, 00:23:07.353 "state": "online", 00:23:07.353 "raid_level": "raid1", 00:23:07.353 "superblock": true, 00:23:07.353 "num_base_bdevs": 2, 00:23:07.353 "num_base_bdevs_discovered": 1, 00:23:07.353 "num_base_bdevs_operational": 1, 00:23:07.353 "base_bdevs_list": [ 00:23:07.353 { 00:23:07.353 "name": null, 00:23:07.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.353 "is_configured": false, 00:23:07.353 "data_offset": 0, 00:23:07.353 "data_size": 7936 00:23:07.353 }, 00:23:07.353 { 00:23:07.353 "name": "BaseBdev2", 00:23:07.353 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:07.353 "is_configured": true, 00:23:07.353 "data_offset": 256, 
00:23:07.353 "data_size": 7936 00:23:07.353 } 00:23:07.353 ] 00:23:07.353 }' 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:07.353 [2024-11-20 07:20:04.660560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:07.353 [2024-11-20 07:20:04.660781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.353 [2024-11-20 07:20:04.660828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:07.353 [2024-11-20 07:20:04.660845] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.353 [2024-11-20 07:20:04.661060] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:07.353 [2024-11-20 07:20:04.661083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:07.353 [2024-11-20 07:20:04.661153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:23:07.353 [2024-11-20 07:20:04.661173] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:07.353 [2024-11-20 07:20:04.661187] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:07.353 [2024-11-20 07:20:04.661200] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:23:07.353 BaseBdev1 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.353 07:20:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:08.730 07:20:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:08.730 "name": "raid_bdev1", 00:23:08.730 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:08.730 "strip_size_kb": 0, 00:23:08.730 "state": "online", 00:23:08.730 "raid_level": "raid1", 00:23:08.730 "superblock": true, 00:23:08.730 "num_base_bdevs": 2, 00:23:08.730 "num_base_bdevs_discovered": 1, 00:23:08.730 "num_base_bdevs_operational": 1, 00:23:08.730 "base_bdevs_list": [ 00:23:08.730 { 00:23:08.730 "name": null, 00:23:08.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.730 "is_configured": false, 00:23:08.730 "data_offset": 0, 00:23:08.730 "data_size": 7936 00:23:08.730 }, 00:23:08.730 { 00:23:08.730 "name": "BaseBdev2", 00:23:08.730 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:08.730 "is_configured": true, 00:23:08.730 "data_offset": 256, 00:23:08.730 "data_size": 7936 00:23:08.730 } 00:23:08.730 ] 00:23:08.730 }' 00:23:08.730 07:20:05 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:08.730 07:20:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:08.988 "name": "raid_bdev1", 00:23:08.988 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:08.988 "strip_size_kb": 0, 00:23:08.988 "state": "online", 00:23:08.988 "raid_level": "raid1", 00:23:08.988 "superblock": true, 00:23:08.988 "num_base_bdevs": 2, 00:23:08.988 "num_base_bdevs_discovered": 1, 00:23:08.988 "num_base_bdevs_operational": 1, 00:23:08.988 "base_bdevs_list": [ 00:23:08.988 { 00:23:08.988 "name": 
null, 00:23:08.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.988 "is_configured": false, 00:23:08.988 "data_offset": 0, 00:23:08.988 "data_size": 7936 00:23:08.988 }, 00:23:08.988 { 00:23:08.988 "name": "BaseBdev2", 00:23:08.988 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:08.988 "is_configured": true, 00:23:08.988 "data_offset": 256, 00:23:08.988 "data_size": 7936 00:23:08.988 } 00:23:08.988 ] 00:23:08.988 }' 00:23:08.988 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:09.265 [2024-11-20 07:20:06.365123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:09.265 [2024-11-20 07:20:06.365321] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:23:09.265 [2024-11-20 07:20:06.365349] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:23:09.265 request: 00:23:09.265 { 00:23:09.265 "base_bdev": "BaseBdev1", 00:23:09.265 "raid_bdev": "raid_bdev1", 00:23:09.265 "method": "bdev_raid_add_base_bdev", 00:23:09.265 "req_id": 1 00:23:09.265 } 00:23:09.265 Got JSON-RPC error response 00:23:09.265 response: 00:23:09.265 { 00:23:09.265 "code": -22, 00:23:09.265 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:23:09.265 } 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.265 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.266 07:20:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:23:10.244 "name": "raid_bdev1", 00:23:10.244 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:10.244 "strip_size_kb": 0, 
00:23:10.244 "state": "online", 00:23:10.244 "raid_level": "raid1", 00:23:10.244 "superblock": true, 00:23:10.244 "num_base_bdevs": 2, 00:23:10.244 "num_base_bdevs_discovered": 1, 00:23:10.244 "num_base_bdevs_operational": 1, 00:23:10.244 "base_bdevs_list": [ 00:23:10.244 { 00:23:10.244 "name": null, 00:23:10.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.244 "is_configured": false, 00:23:10.244 "data_offset": 0, 00:23:10.244 "data_size": 7936 00:23:10.244 }, 00:23:10.244 { 00:23:10.244 "name": "BaseBdev2", 00:23:10.244 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:10.244 "is_configured": true, 00:23:10.244 "data_offset": 256, 00:23:10.244 "data_size": 7936 00:23:10.244 } 00:23:10.244 ] 00:23:10.244 }' 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:23:10.244 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:10.810 07:20:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.810 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:23:10.810 "name": "raid_bdev1", 00:23:10.810 "uuid": "6a5d972e-ba77-40ef-b3f9-377d3e5b58a1", 00:23:10.810 "strip_size_kb": 0, 00:23:10.810 "state": "online", 00:23:10.810 "raid_level": "raid1", 00:23:10.810 "superblock": true, 00:23:10.810 "num_base_bdevs": 2, 00:23:10.810 "num_base_bdevs_discovered": 1, 00:23:10.810 "num_base_bdevs_operational": 1, 00:23:10.810 "base_bdevs_list": [ 00:23:10.810 { 00:23:10.810 "name": null, 00:23:10.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.811 "is_configured": false, 00:23:10.811 "data_offset": 0, 00:23:10.811 "data_size": 7936 00:23:10.811 }, 00:23:10.811 { 00:23:10.811 "name": "BaseBdev2", 00:23:10.811 "uuid": "ed30c16c-0e04-58f5-a266-f6c60ab8327d", 00:23:10.811 "is_configured": true, 00:23:10.811 "data_offset": 256, 00:23:10.811 "data_size": 7936 00:23:10.811 } 00:23:10.811 ] 00:23:10.811 }' 00:23:10.811 07:20:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89427 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89427 ']' 00:23:10.811 07:20:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89427 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89427 00:23:10.811 killing process with pid 89427 00:23:10.811 Received shutdown signal, test time was about 60.000000 seconds 00:23:10.811 00:23:10.811 Latency(us) 00:23:10.811 [2024-11-20T07:20:08.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.811 [2024-11-20T07:20:08.131Z] =================================================================================================================== 00:23:10.811 [2024-11-20T07:20:08.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89427' 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89427 00:23:10.811 [2024-11-20 07:20:08.087593] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:10.811 07:20:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89427 00:23:10.811 [2024-11-20 07:20:08.087748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.811 [2024-11-20 07:20:08.087814] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:23:10.811 [2024-11-20 07:20:08.087833] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:23:11.069 [2024-11-20 07:20:08.366498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:12.445 07:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:23:12.445 ************************************ 00:23:12.445 END TEST raid_rebuild_test_sb_md_interleaved 00:23:12.445 ************************************ 00:23:12.445 00:23:12.445 real 0m18.761s 00:23:12.445 user 0m25.709s 00:23:12.445 sys 0m1.419s 00:23:12.445 07:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.445 07:20:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:23:12.445 07:20:09 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:23:12.445 07:20:09 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:23:12.445 07:20:09 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89427 ']' 00:23:12.445 07:20:09 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89427 00:23:12.445 07:20:09 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:23:12.445 ************************************ 00:23:12.445 END TEST bdev_raid 00:23:12.445 ************************************ 00:23:12.445 00:23:12.445 real 13m4.113s 00:23:12.445 user 18m28.651s 00:23:12.445 sys 1m45.897s 00:23:12.445 07:20:09 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.445 07:20:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.445 07:20:09 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:12.445 07:20:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:12.445 07:20:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.445 07:20:09 -- common/autotest_common.sh@10 -- # set +x 00:23:12.445 
************************************ 00:23:12.445 START TEST spdkcli_raid 00:23:12.445 ************************************ 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:12.445 * Looking for test storage... 00:23:12.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.445 07:20:09 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:12.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.445 --rc genhtml_branch_coverage=1 00:23:12.445 --rc genhtml_function_coverage=1 00:23:12.445 --rc genhtml_legend=1 00:23:12.445 --rc geninfo_all_blocks=1 00:23:12.445 --rc geninfo_unexecuted_blocks=1 00:23:12.445 00:23:12.445 ' 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:12.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.445 --rc genhtml_branch_coverage=1 00:23:12.445 --rc genhtml_function_coverage=1 00:23:12.445 --rc genhtml_legend=1 00:23:12.445 --rc geninfo_all_blocks=1 00:23:12.445 --rc geninfo_unexecuted_blocks=1 00:23:12.445 00:23:12.445 ' 00:23:12.445 
07:20:09 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:12.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.445 --rc genhtml_branch_coverage=1 00:23:12.445 --rc genhtml_function_coverage=1 00:23:12.445 --rc genhtml_legend=1 00:23:12.445 --rc geninfo_all_blocks=1 00:23:12.445 --rc geninfo_unexecuted_blocks=1 00:23:12.445 00:23:12.445 ' 00:23:12.445 07:20:09 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:12.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.445 --rc genhtml_branch_coverage=1 00:23:12.445 --rc genhtml_function_coverage=1 00:23:12.445 --rc genhtml_legend=1 00:23:12.445 --rc geninfo_all_blocks=1 00:23:12.445 --rc geninfo_unexecuted_blocks=1 00:23:12.445 00:23:12.445 ' 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:23:12.445 07:20:09 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:23:12.445 07:20:09 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.446 07:20:09 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:23:12.446 07:20:09 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90104 00:23:12.446 07:20:09 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:23:12.446 07:20:09 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90104 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90104 ']' 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.446 07:20:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:12.704 [2024-11-20 07:20:09.839891] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:23:12.704 [2024-11-20 07:20:09.840159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90104 ] 00:23:12.962 [2024-11-20 07:20:10.034178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:12.962 [2024-11-20 07:20:10.238240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.962 [2024-11-20 07:20:10.238240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.898 07:20:11 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.898 07:20:11 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:23:13.898 07:20:11 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:23:13.898 07:20:11 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.898 07:20:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.898 07:20:11 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:23:13.898 07:20:11 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.898 07:20:11 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:13.898 07:20:11 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:23:13.898 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:23:13.898 ' 00:23:15.802 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:23:15.802 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:23:15.802 07:20:12 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:23:15.802 07:20:12 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:15.802 07:20:12 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.802 07:20:12 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:23:15.802 07:20:12 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:15.802 07:20:12 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 07:20:12 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:23:15.802 ' 00:23:16.737 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:23:16.996 07:20:14 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:23:16.996 07:20:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.996 07:20:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 07:20:14 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:23:16.996 07:20:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.996 07:20:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:16.996 07:20:14 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:23:16.996 07:20:14 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:23:17.564 07:20:14 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:23:17.564 07:20:14 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:23:17.564 07:20:14 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:23:17.564 07:20:14 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:17.564 07:20:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.564 07:20:14 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:23:17.564 07:20:14 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.564 07:20:14 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:17.564 07:20:14 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:23:17.564 ' 00:23:18.944 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:23:18.944 07:20:15 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:23:18.944 07:20:15 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.944 07:20:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.944 07:20:15 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:23:18.944 07:20:15 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.944 07:20:15 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.944 07:20:15 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:23:18.944 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:23:18.944 ' 00:23:20.322 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:23:20.322 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:23:20.322 07:20:17 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:20.322 07:20:17 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90104 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90104 ']' 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90104 00:23:20.322 07:20:17 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90104 00:23:20.322 killing process with pid 90104 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90104' 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90104 00:23:20.322 07:20:17 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90104 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90104 ']' 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90104 00:23:22.855 07:20:19 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90104 ']' 00:23:22.855 07:20:19 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90104 00:23:22.855 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90104) - No such process 00:23:22.855 Process with pid 90104 is not found 00:23:22.855 07:20:19 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90104 is not found' 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:22.855 07:20:19 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:22.855 00:23:22.855 real 0m10.316s 00:23:22.855 user 0m21.388s 00:23:22.855 sys 
0m1.098s 00:23:22.855 07:20:19 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:22.855 07:20:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:23:22.855 ************************************ 00:23:22.855 END TEST spdkcli_raid 00:23:22.855 ************************************ 00:23:22.855 07:20:19 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:22.855 07:20:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:22.855 07:20:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:22.855 07:20:19 -- common/autotest_common.sh@10 -- # set +x 00:23:22.855 ************************************ 00:23:22.855 START TEST blockdev_raid5f 00:23:22.855 ************************************ 00:23:22.855 07:20:19 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:23:22.855 * Looking for test storage... 00:23:22.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:22.855 07:20:19 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:22.855 07:20:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:23:22.855 07:20:19 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:22.855 07:20:20 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.855 07:20:20 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:23:22.855 07:20:20 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.855 07:20:20 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:22.856 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.856 --rc genhtml_branch_coverage=1 00:23:22.856 --rc genhtml_function_coverage=1 00:23:22.856 --rc genhtml_legend=1 00:23:22.856 --rc geninfo_all_blocks=1 00:23:22.856 --rc geninfo_unexecuted_blocks=1 00:23:22.856 00:23:22.856 ' 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:22.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.856 --rc genhtml_branch_coverage=1 00:23:22.856 --rc genhtml_function_coverage=1 00:23:22.856 --rc genhtml_legend=1 00:23:22.856 --rc geninfo_all_blocks=1 00:23:22.856 --rc geninfo_unexecuted_blocks=1 00:23:22.856 00:23:22.856 ' 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:22.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.856 --rc genhtml_branch_coverage=1 00:23:22.856 --rc genhtml_function_coverage=1 00:23:22.856 --rc genhtml_legend=1 00:23:22.856 --rc geninfo_all_blocks=1 00:23:22.856 --rc geninfo_unexecuted_blocks=1 00:23:22.856 00:23:22.856 ' 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:22.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.856 --rc genhtml_branch_coverage=1 00:23:22.856 --rc genhtml_function_coverage=1 00:23:22.856 --rc genhtml_legend=1 00:23:22.856 --rc geninfo_all_blocks=1 00:23:22.856 --rc geninfo_unexecuted_blocks=1 00:23:22.856 00:23:22.856 ' 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90379 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:22.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.856 07:20:20 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90379 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90379 ']' 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.856 07:20:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:23.115 [2024-11-20 07:20:20.185632] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:23:23.115 [2024-11-20 07:20:20.185814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90379 ] 00:23:23.115 [2024-11-20 07:20:20.377212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.374 [2024-11-20 07:20:20.540265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.309 07:20:21 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:23:24.310 07:20:21 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.310 Malloc0 00:23:24.310 Malloc1 00:23:24.310 Malloc2 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.310 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.310 07:20:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 07:20:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.568 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:23:24.568 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:23:24.568 07:20:21 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.568 07:20:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:24.568 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:23:24.568 07:20:21 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.568 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:23:24.569 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "44d0f721-58d6-4a9c-996c-92135dd1263a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "44d0f721-58d6-4a9c-996c-92135dd1263a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "44d0f721-58d6-4a9c-996c-92135dd1263a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fa9dbcce-efa3-4051-976a-cc30ca863dc3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "228f5032-b2ef-411d-bfd6-9da076ea11c3",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "9219b7dd-a6a9-4f42-ad17-0d1be66d5396",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:24.569 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:23:24.569 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:23:24.569 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:23:24.569 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:23:24.569 07:20:21 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90379 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90379 ']' 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90379 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90379 00:23:24.569 killing process with pid 90379 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90379' 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90379 00:23:24.569 07:20:21 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90379 00:23:27.107 07:20:24 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:27.107 07:20:24 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:27.107 07:20:24 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:27.107 07:20:24 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.107 07:20:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:27.107 ************************************ 00:23:27.107 START TEST bdev_hello_world 00:23:27.107 ************************************ 00:23:27.107 07:20:24 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:23:27.107 [2024-11-20 07:20:24.323916] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:23:27.107 [2024-11-20 07:20:24.324091] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90452 ] 00:23:27.366 [2024-11-20 07:20:24.511457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.366 [2024-11-20 07:20:24.643703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.932 [2024-11-20 07:20:25.166314] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:27.932 [2024-11-20 07:20:25.166380] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:23:27.932 [2024-11-20 07:20:25.166405] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:27.932 [2024-11-20 07:20:25.167003] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:27.932 [2024-11-20 07:20:25.167193] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:27.932 [2024-11-20 07:20:25.167241] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:27.932 [2024-11-20 07:20:25.167318] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:23:27.932 00:23:27.932 [2024-11-20 07:20:25.167356] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:29.308 ************************************ 00:23:29.308 END TEST bdev_hello_world 00:23:29.308 ************************************ 00:23:29.308 00:23:29.308 real 0m2.246s 00:23:29.308 user 0m1.815s 00:23:29.308 sys 0m0.305s 00:23:29.308 07:20:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.308 07:20:26 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:29.308 07:20:26 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:23:29.308 07:20:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.308 07:20:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.308 07:20:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:29.308 ************************************ 00:23:29.308 START TEST bdev_bounds 00:23:29.308 ************************************ 00:23:29.308 Process bdevio pid: 90494 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90494 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90494' 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90494 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90494 ']' 00:23:29.308 07:20:26 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.308 07:20:26 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:29.308 [2024-11-20 07:20:26.610079] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:23:29.308 [2024-11-20 07:20:26.610241] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90494 ] 00:23:29.566 [2024-11-20 07:20:26.795192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:29.826 [2024-11-20 07:20:26.957554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.826 [2024-11-20 07:20:26.957730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.826 [2024-11-20 07:20:26.957737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.392 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.392 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:30.392 07:20:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:30.392 I/O targets: 00:23:30.392 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:23:30.392 00:23:30.392 
00:23:30.392 CUnit - A unit testing framework for C - Version 2.1-3 00:23:30.392 http://cunit.sourceforge.net/ 00:23:30.392 00:23:30.392 00:23:30.392 Suite: bdevio tests on: raid5f 00:23:30.392 Test: blockdev write read block ...passed 00:23:30.392 Test: blockdev write zeroes read block ...passed 00:23:30.650 Test: blockdev write zeroes read no split ...passed 00:23:30.650 Test: blockdev write zeroes read split ...passed 00:23:30.650 Test: blockdev write zeroes read split partial ...passed 00:23:30.650 Test: blockdev reset ...passed 00:23:30.650 Test: blockdev write read 8 blocks ...passed 00:23:30.650 Test: blockdev write read size > 128k ...passed 00:23:30.650 Test: blockdev write read invalid size ...passed 00:23:30.650 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:30.651 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:30.651 Test: blockdev write read max offset ...passed 00:23:30.651 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:30.651 Test: blockdev writev readv 8 blocks ...passed 00:23:30.651 Test: blockdev writev readv 30 x 1block ...passed 00:23:30.651 Test: blockdev writev readv block ...passed 00:23:30.651 Test: blockdev writev readv size > 128k ...passed 00:23:30.651 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:30.651 Test: blockdev comparev and writev ...passed 00:23:30.651 Test: blockdev nvme passthru rw ...passed 00:23:30.651 Test: blockdev nvme passthru vendor specific ...passed 00:23:30.651 Test: blockdev nvme admin passthru ...passed 00:23:30.651 Test: blockdev copy ...passed 00:23:30.651 00:23:30.651 Run Summary: Type Total Ran Passed Failed Inactive 00:23:30.651 suites 1 1 n/a 0 0 00:23:30.651 tests 23 23 23 0 0 00:23:30.651 asserts 130 130 130 0 n/a 00:23:30.651 00:23:30.651 Elapsed time = 0.561 seconds 00:23:30.651 0 00:23:30.651 07:20:27 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90494 00:23:30.651 
07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90494 ']' 00:23:30.651 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90494 00:23:30.651 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90494 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90494' 00:23:30.908 killing process with pid 90494 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90494 00:23:30.908 07:20:27 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90494 00:23:32.283 07:20:29 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:32.283 00:23:32.283 real 0m2.774s 00:23:32.283 user 0m6.869s 00:23:32.283 sys 0m0.402s 00:23:32.283 07:20:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.283 07:20:29 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:32.283 ************************************ 00:23:32.283 END TEST bdev_bounds 00:23:32.283 ************************************ 00:23:32.283 07:20:29 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:32.283 07:20:29 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:32.284 07:20:29 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.284 
07:20:29 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:32.284 ************************************ 00:23:32.284 START TEST bdev_nbd 00:23:32.284 ************************************ 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90554 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90554 /var/tmp/spdk-nbd.sock 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90554 ']' 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:32.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.284 07:20:29 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:32.284 [2024-11-20 07:20:29.472656] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:23:32.284 [2024-11-20 07:20:29.473161] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.542 [2024-11-20 07:20:29.652060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.542 [2024-11-20 07:20:29.786566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:33.477 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:33.736 1+0 records in 00:23:33.736 1+0 records out 00:23:33.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400981 s, 10.2 MB/s 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:23:33.736 07:20:30 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:33.994 { 00:23:33.994 "nbd_device": "/dev/nbd0", 00:23:33.994 "bdev_name": "raid5f" 00:23:33.994 } 00:23:33.994 ]' 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:33.994 { 00:23:33.994 "nbd_device": "/dev/nbd0", 00:23:33.994 "bdev_name": "raid5f" 00:23:33.994 } 00:23:33.994 ]' 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.994 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.253 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:34.511 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:34.511 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:34.511 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:34.511 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:34.512 07:20:31 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:23:34.770 /dev/nbd0 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:35.029 07:20:32 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.029 1+0 records in 00:23:35.029 1+0 records out 00:23:35.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301026 s, 13.6 MB/s 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.029 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:35.287 { 00:23:35.287 "nbd_device": "/dev/nbd0", 00:23:35.287 "bdev_name": "raid5f" 00:23:35.287 } 00:23:35.287 ]' 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:35.287 { 00:23:35.287 "nbd_device": "/dev/nbd0", 00:23:35.287 "bdev_name": "raid5f" 00:23:35.287 } 00:23:35.287 ]' 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:35.287 256+0 records in 00:23:35.287 256+0 records out 00:23:35.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00734987 s, 143 MB/s 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:35.287 256+0 records in 00:23:35.287 256+0 records out 00:23:35.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0411552 s, 25.5 MB/s 00:23:35.287 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:35.288 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:35.564 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:35.564 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:35.564 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:35.564 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:35.564 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:35.565 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:35.565 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:35.565 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:35.565 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:35.565 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.565 07:20:32 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:35.833 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:36.399 malloc_lvol_verify 00:23:36.399 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:36.658 76c913fb-f7c9-4379-bd62-0c4ade59a509 00:23:36.658 07:20:33 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:36.915 53e60d7e-3701-4411-b8f3-2c26fb249a56 00:23:36.915 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:37.173 /dev/nbd0 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:37.173 mke2fs 1.47.0 (5-Feb-2023) 00:23:37.173 Discarding device blocks: 0/4096 done 00:23:37.173 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:37.173 00:23:37.173 Allocating group tables: 0/1 done 00:23:37.173 Writing inode tables: 0/1 done 00:23:37.173 Creating journal (1024 blocks): done 00:23:37.173 Writing superblocks and filesystem accounting information: 0/1 done 00:23:37.173 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:37.173 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.174 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:37.431 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90554 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90554 ']' 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90554 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90554 00:23:37.690 killing process with pid 90554 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90554' 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90554 00:23:37.690 07:20:34 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90554 00:23:39.063 07:20:36 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:39.063 00:23:39.063 real 0m6.840s 00:23:39.063 user 0m9.910s 00:23:39.063 sys 0m1.413s 00:23:39.063 07:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:39.063 07:20:36 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:39.063 ************************************ 00:23:39.063 END TEST bdev_nbd 00:23:39.063 ************************************ 00:23:39.063 07:20:36 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:39.063 07:20:36 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:23:39.063 07:20:36 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:23:39.063 07:20:36 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:23:39.063 07:20:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:39.063 07:20:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.063 07:20:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:39.063 ************************************ 00:23:39.063 START TEST bdev_fio 00:23:39.063 ************************************ 00:23:39.063 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:23:39.063 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:39.063 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:39.063 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:39.063 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:39.063 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:39.064 ************************************ 00:23:39.064 START TEST bdev_fio_rw_verify 00:23:39.064 ************************************ 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:23:39.064 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:39.322 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:39.322 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:39.322 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:23:39.322 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:39.322 07:20:36 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:39.322 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:39.322 fio-3.35 00:23:39.322 Starting 1 thread 00:23:51.542 00:23:51.542 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90764: Wed Nov 20 07:20:47 2024 00:23:51.542 read: IOPS=8442, BW=33.0MiB/s (34.6MB/s)(330MiB/10001msec) 00:23:51.542 slat (usec): min=23, max=436, avg=29.82, stdev= 4.41 00:23:51.542 clat (usec): min=14, max=703, avg=190.14, stdev=71.30 00:23:51.542 lat (usec): min=42, max=734, avg=219.96, stdev=71.96 00:23:51.542 clat percentiles (usec): 00:23:51.542 | 50.000th=[ 194], 99.000th=[ 322], 99.900th=[ 383], 99.990th=[ 457], 00:23:51.542 | 99.999th=[ 701] 00:23:51.542 write: IOPS=8826, BW=34.5MiB/s (36.2MB/s)(340MiB/9869msec); 0 zone resets 00:23:51.542 slat (usec): min=12, max=238, avg=23.69, stdev= 5.17 00:23:51.542 clat (usec): min=79, max=1592, avg=429.74, stdev=56.15 00:23:51.542 lat (usec): min=101, max=1630, avg=453.44, stdev=57.59 00:23:51.542 clat percentiles (usec): 00:23:51.542 | 50.000th=[ 437], 99.000th=[ 562], 99.900th=[ 693], 99.990th=[ 1172], 00:23:51.542 | 99.999th=[ 1598] 00:23:51.542 bw ( KiB/s): min=32064, max=36784, per=99.26%, avg=35044.00, stdev=1262.02, samples=19 00:23:51.542 iops : min= 8016, max= 9196, avg=8761.00, stdev=315.51, samples=19 00:23:51.542 lat (usec) : 20=0.01%, 50=0.01%, 100=5.92%, 
250=30.52%, 500=60.58% 00:23:51.542 lat (usec) : 750=2.95%, 1000=0.01% 00:23:51.542 lat (msec) : 2=0.01% 00:23:51.542 cpu : usr=98.53%, sys=0.57%, ctx=19, majf=0, minf=7318 00:23:51.542 IO depths : 1=7.8%, 2=20.0%, 4=55.0%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.542 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.542 issued rwts: total=84429,87110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.542 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:51.542 00:23:51.542 Run status group 0 (all jobs): 00:23:51.542 READ: bw=33.0MiB/s (34.6MB/s), 33.0MiB/s-33.0MiB/s (34.6MB/s-34.6MB/s), io=330MiB (346MB), run=10001-10001msec 00:23:51.542 WRITE: bw=34.5MiB/s (36.2MB/s), 34.5MiB/s-34.5MiB/s (36.2MB/s-36.2MB/s), io=340MiB (357MB), run=9869-9869msec 00:23:52.109 ----------------------------------------------------- 00:23:52.109 Suppressions used: 00:23:52.109 count bytes template 00:23:52.109 1 7 /usr/src/fio/parse.c 00:23:52.109 70 6720 /usr/src/fio/iolog.c 00:23:52.109 1 8 libtcmalloc_minimal.so 00:23:52.109 1 904 libcrypto.so 00:23:52.109 ----------------------------------------------------- 00:23:52.109 00:23:52.109 00:23:52.109 real 0m12.806s 00:23:52.109 user 0m13.002s 00:23:52.109 sys 0m0.989s 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 ************************************ 00:23:52.109 END TEST bdev_fio_rw_verify 00:23:52.109 ************************************ 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 
-- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "44d0f721-58d6-4a9c-996c-92135dd1263a"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "44d0f721-58d6-4a9c-996c-92135dd1263a",' 
' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "44d0f721-58d6-4a9c-996c-92135dd1263a",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "fa9dbcce-efa3-4051-976a-cc30ca863dc3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "228f5032-b2ef-411d-bfd6-9da076ea11c3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "9219b7dd-a6a9-4f42-ad17-0d1be66d5396",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:52.109 /home/vagrant/spdk_repo/spdk 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:52.109 00:23:52.109 real 
0m13.018s 00:23:52.109 user 0m13.096s 00:23:52.109 sys 0m1.071s 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.109 07:20:49 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 ************************************ 00:23:52.109 END TEST bdev_fio 00:23:52.109 ************************************ 00:23:52.109 07:20:49 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:52.109 07:20:49 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:52.109 07:20:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:52.109 07:20:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.109 07:20:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:52.109 ************************************ 00:23:52.109 START TEST bdev_verify 00:23:52.109 ************************************ 00:23:52.109 07:20:49 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:52.109 [2024-11-20 07:20:49.385892] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 
00:23:52.109 [2024-11-20 07:20:49.386044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90918 ] 00:23:52.368 [2024-11-20 07:20:49.560826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:52.626 [2024-11-20 07:20:49.690597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.626 [2024-11-20 07:20:49.690603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.193 Running I/O for 5 seconds... 00:23:55.271 12200.00 IOPS, 47.66 MiB/s [2024-11-20T07:20:53.625Z] 12706.50 IOPS, 49.63 MiB/s [2024-11-20T07:20:54.561Z] 12692.33 IOPS, 49.58 MiB/s [2024-11-20T07:20:55.496Z] 12766.00 IOPS, 49.87 MiB/s [2024-11-20T07:20:55.496Z] 12811.80 IOPS, 50.05 MiB/s 00:23:58.176 Latency(us) 00:23:58.176 [2024-11-20T07:20:55.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.176 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:58.176 Verification LBA range: start 0x0 length 0x2000 00:23:58.176 raid5f : 5.02 6440.70 25.16 0.00 0.00 29875.20 342.57 26810.18 00:23:58.176 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:58.176 Verification LBA range: start 0x2000 length 0x2000 00:23:58.176 raid5f : 5.02 6368.15 24.88 0.00 0.00 30245.03 129.40 61484.68 00:23:58.176 [2024-11-20T07:20:55.496Z] =================================================================================================================== 00:23:58.176 [2024-11-20T07:20:55.496Z] Total : 12808.85 50.03 0.00 0.00 30059.12 129.40 61484.68 00:23:59.553 00:23:59.553 real 0m7.270s 00:23:59.553 user 0m13.378s 00:23:59.553 sys 0m0.300s 00:23:59.553 07:20:56 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.553 07:20:56 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:59.553 ************************************ 00:23:59.553 END TEST bdev_verify 00:23:59.553 ************************************ 00:23:59.553 07:20:56 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:59.553 07:20:56 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:59.553 07:20:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.553 07:20:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:23:59.553 ************************************ 00:23:59.553 START TEST bdev_verify_big_io 00:23:59.553 ************************************ 00:23:59.553 07:20:56 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:59.553 [2024-11-20 07:20:56.699577] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:23:59.553 [2024-11-20 07:20:56.699739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91013 ] 00:23:59.812 [2024-11-20 07:20:56.878333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:59.812 [2024-11-20 07:20:57.041771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.812 [2024-11-20 07:20:57.041780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.379 Running I/O for 5 seconds... 
00:24:02.689 506.00 IOPS, 31.62 MiB/s [2024-11-20T07:21:00.943Z] 600.50 IOPS, 37.53 MiB/s [2024-11-20T07:21:02.023Z] 676.00 IOPS, 42.25 MiB/s [2024-11-20T07:21:02.987Z] 713.50 IOPS, 44.59 MiB/s [2024-11-20T07:21:02.987Z] 748.60 IOPS, 46.79 MiB/s 00:24:05.667 Latency(us) 00:24:05.667 [2024-11-20T07:21:02.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.667 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:05.667 Verification LBA range: start 0x0 length 0x200 00:24:05.667 raid5f : 5.22 376.97 23.56 0.00 0.00 8351341.41 187.11 423243.40 00:24:05.667 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:05.667 Verification LBA range: start 0x200 length 0x200 00:24:05.667 raid5f : 5.18 392.33 24.52 0.00 0.00 8023180.37 177.80 419430.40 00:24:05.667 [2024-11-20T07:21:02.987Z] =================================================================================================================== 00:24:05.667 [2024-11-20T07:21:02.987Z] Total : 769.30 48.08 0.00 0.00 8184593.92 177.80 423243.40 00:24:07.082 00:24:07.082 real 0m7.496s 00:24:07.082 user 0m13.797s 00:24:07.082 sys 0m0.297s 00:24:07.082 07:21:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:07.082 07:21:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:07.082 ************************************ 00:24:07.082 END TEST bdev_verify_big_io 00:24:07.082 ************************************ 00:24:07.082 07:21:04 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:07.082 07:21:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:07.082 07:21:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:07.082 07:21:04 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:07.082 ************************************ 00:24:07.082 START TEST bdev_write_zeroes 00:24:07.082 ************************************ 00:24:07.082 07:21:04 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:07.082 [2024-11-20 07:21:04.247046] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:24:07.082 [2024-11-20 07:21:04.247204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91112 ] 00:24:07.341 [2024-11-20 07:21:04.420591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.341 [2024-11-20 07:21:04.549994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.909 Running I/O for 1 seconds... 
00:24:08.845 19815.00 IOPS, 77.40 MiB/s 00:24:08.845 Latency(us) 00:24:08.845 [2024-11-20T07:21:06.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.845 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:08.845 raid5f : 1.01 19779.11 77.26 0.00 0.00 6445.86 1936.29 8757.99 00:24:08.845 [2024-11-20T07:21:06.165Z] =================================================================================================================== 00:24:08.845 [2024-11-20T07:21:06.165Z] Total : 19779.11 77.26 0.00 0.00 6445.86 1936.29 8757.99 00:24:10.222 00:24:10.222 real 0m3.209s 00:24:10.222 user 0m2.808s 00:24:10.222 sys 0m0.265s 00:24:10.222 07:21:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.222 07:21:07 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:10.222 ************************************ 00:24:10.222 END TEST bdev_write_zeroes 00:24:10.222 ************************************ 00:24:10.222 07:21:07 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:10.222 07:21:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:10.222 07:21:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.222 07:21:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:10.222 ************************************ 00:24:10.222 START TEST bdev_json_nonenclosed 00:24:10.222 ************************************ 00:24:10.222 07:21:07 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:10.222 [2024-11-20 
07:21:07.529462] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:24:10.222 [2024-11-20 07:21:07.529630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91165 ] 00:24:10.481 [2024-11-20 07:21:07.708050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.740 [2024-11-20 07:21:07.861009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.740 [2024-11-20 07:21:07.861138] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:10.740 [2024-11-20 07:21:07.861187] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:10.740 [2024-11-20 07:21:07.861204] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:11.000 00:24:11.000 real 0m0.722s 00:24:11.000 user 0m0.452s 00:24:11.000 sys 0m0.165s 00:24:11.000 07:21:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.000 07:21:08 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:11.000 ************************************ 00:24:11.000 END TEST bdev_json_nonenclosed 00:24:11.000 ************************************ 00:24:11.000 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:11.000 07:21:08 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:11.000 07:21:08 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.000 07:21:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:11.000 
************************************ 00:24:11.000 START TEST bdev_json_nonarray 00:24:11.000 ************************************ 00:24:11.000 07:21:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:11.000 [2024-11-20 07:21:08.284679] Starting SPDK v25.01-pre git sha1 097b7c969 / DPDK 24.03.0 initialization... 00:24:11.000 [2024-11-20 07:21:08.284836] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91191 ] 00:24:11.258 [2024-11-20 07:21:08.464136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.541 [2024-11-20 07:21:08.597463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.541 [2024-11-20 07:21:08.597599] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:24:11.541 [2024-11-20 07:21:08.597630] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:11.541 [2024-11-20 07:21:08.597658] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:11.809 00:24:11.809 real 0m0.684s 00:24:11.809 user 0m0.457s 00:24:11.809 sys 0m0.121s 00:24:11.809 ************************************ 00:24:11.809 END TEST bdev_json_nonarray 00:24:11.809 ************************************ 00:24:11.809 07:21:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.809 07:21:08 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:24:11.809 07:21:08 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:24:11.809 ************************************ 00:24:11.809 END TEST blockdev_raid5f 00:24:11.809 ************************************ 00:24:11.809 00:24:11.810 real 0m49.059s 00:24:11.810 user 1m7.017s 00:24:11.810 sys 0m5.337s 00:24:11.810 07:21:08 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.810 07:21:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:24:11.810 07:21:08 -- spdk/autotest.sh@194 -- # uname -s 00:24:11.810 07:21:08 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:24:11.810 07:21:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:11.810 07:21:08 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:24:11.810 07:21:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:08 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:08 -- spdk/autotest.sh@260 -- # timing_exit lib 00:24:11.810 07:21:08 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:11.810 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:24:11.810 07:21:09 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:24:11.810 07:21:09 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:24:11.810 07:21:09 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:24:11.810 07:21:09 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:24:11.810 07:21:09 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:24:11.810 07:21:09 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:24:11.810 07:21:09 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:24:11.810 07:21:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.810 07:21:09 -- common/autotest_common.sh@10 -- # set +x 00:24:11.810 07:21:09 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:24:11.810 07:21:09 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:24:11.810 07:21:09 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:24:11.810 07:21:09 -- common/autotest_common.sh@10 -- # set +x 00:24:13.714 INFO: APP EXITING 00:24:13.714 INFO: killing all VMs 00:24:13.714 INFO: killing vhost app 00:24:13.714 INFO: EXIT DONE 00:24:13.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:13.715 Waiting for block devices as requested 00:24:13.715 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:13.973 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:14.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:14.800 Cleaning 00:24:14.800 Removing: /var/run/dpdk/spdk0/config 00:24:14.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:24:14.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:24:14.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:24:14.800 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:24:14.800 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:24:14.800 Removing: /var/run/dpdk/spdk0/hugepage_info 00:24:14.800 Removing: /dev/shm/spdk_tgt_trace.pid56748 00:24:14.800 Removing: /var/run/dpdk/spdk0 00:24:14.800 Removing: /var/run/dpdk/spdk_pid56518 00:24:14.800 Removing: /var/run/dpdk/spdk_pid56748 00:24:14.800 Removing: /var/run/dpdk/spdk_pid56977 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57081 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57137 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57265 00:24:14.800 Removing: 
/var/run/dpdk/spdk_pid57294 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57499 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57610 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57717 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57839 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57947 00:24:14.800 Removing: /var/run/dpdk/spdk_pid57987 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58023 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58099 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58211 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58691 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58761 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58835 00:24:14.800 Removing: /var/run/dpdk/spdk_pid58851 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59004 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59025 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59181 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59197 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59267 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59290 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59354 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59372 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59573 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59609 00:24:14.800 Removing: /var/run/dpdk/spdk_pid59698 00:24:14.800 Removing: /var/run/dpdk/spdk_pid61080 00:24:14.800 Removing: /var/run/dpdk/spdk_pid61292 00:24:14.800 Removing: /var/run/dpdk/spdk_pid61437 00:24:14.800 Removing: /var/run/dpdk/spdk_pid62097 00:24:14.800 Removing: /var/run/dpdk/spdk_pid62303 00:24:14.800 Removing: /var/run/dpdk/spdk_pid62453 00:24:14.800 Removing: /var/run/dpdk/spdk_pid63103 00:24:14.800 Removing: /var/run/dpdk/spdk_pid63439 00:24:14.800 Removing: /var/run/dpdk/spdk_pid63584 00:24:14.800 Removing: /var/run/dpdk/spdk_pid65008 00:24:14.800 Removing: /var/run/dpdk/spdk_pid65262 00:24:14.800 Removing: /var/run/dpdk/spdk_pid65408 00:24:14.800 Removing: /var/run/dpdk/spdk_pid66821 00:24:14.800 Removing: /var/run/dpdk/spdk_pid67074 00:24:14.800 Removing: 
/var/run/dpdk/spdk_pid67225 00:24:14.800 Removing: /var/run/dpdk/spdk_pid68638 00:24:14.800 Removing: /var/run/dpdk/spdk_pid69095 00:24:14.800 Removing: /var/run/dpdk/spdk_pid69241 00:24:14.800 Removing: /var/run/dpdk/spdk_pid70755 00:24:14.800 Removing: /var/run/dpdk/spdk_pid71019 00:24:14.800 Removing: /var/run/dpdk/spdk_pid71170 00:24:14.800 Removing: /var/run/dpdk/spdk_pid72680 00:24:14.800 Removing: /var/run/dpdk/spdk_pid72950 00:24:14.800 Removing: /var/run/dpdk/spdk_pid73097 00:24:14.800 Removing: /var/run/dpdk/spdk_pid74605 00:24:14.800 Removing: /var/run/dpdk/spdk_pid75103 00:24:14.800 Removing: /var/run/dpdk/spdk_pid75249 00:24:14.800 Removing: /var/run/dpdk/spdk_pid75387 00:24:14.800 Removing: /var/run/dpdk/spdk_pid75844 00:24:14.800 Removing: /var/run/dpdk/spdk_pid76606 00:24:14.800 Removing: /var/run/dpdk/spdk_pid77008 00:24:14.800 Removing: /var/run/dpdk/spdk_pid77709 00:24:14.800 Removing: /var/run/dpdk/spdk_pid78191 00:24:15.059 Removing: /var/run/dpdk/spdk_pid78984 00:24:15.059 Removing: /var/run/dpdk/spdk_pid79404 00:24:15.059 Removing: /var/run/dpdk/spdk_pid81407 00:24:15.059 Removing: /var/run/dpdk/spdk_pid81862 00:24:15.059 Removing: /var/run/dpdk/spdk_pid82316 00:24:15.059 Removing: /var/run/dpdk/spdk_pid84441 00:24:15.059 Removing: /var/run/dpdk/spdk_pid84938 00:24:15.059 Removing: /var/run/dpdk/spdk_pid85448 00:24:15.059 Removing: /var/run/dpdk/spdk_pid86527 00:24:15.059 Removing: /var/run/dpdk/spdk_pid86851 00:24:15.059 Removing: /var/run/dpdk/spdk_pid87801 00:24:15.059 Removing: /var/run/dpdk/spdk_pid88135 00:24:15.059 Removing: /var/run/dpdk/spdk_pid89092 00:24:15.059 Removing: /var/run/dpdk/spdk_pid89427 00:24:15.059 Removing: /var/run/dpdk/spdk_pid90104 00:24:15.059 Removing: /var/run/dpdk/spdk_pid90379 00:24:15.059 Removing: /var/run/dpdk/spdk_pid90452 00:24:15.059 Removing: /var/run/dpdk/spdk_pid90494 00:24:15.059 Removing: /var/run/dpdk/spdk_pid90744 00:24:15.059 Removing: /var/run/dpdk/spdk_pid90918 00:24:15.059 Removing: 
/var/run/dpdk/spdk_pid91013 00:24:15.059 Removing: /var/run/dpdk/spdk_pid91112 00:24:15.059 Removing: /var/run/dpdk/spdk_pid91165 00:24:15.059 Removing: /var/run/dpdk/spdk_pid91191 00:24:15.059 Clean 00:24:15.059 07:21:12 -- common/autotest_common.sh@1453 -- # return 0 00:24:15.059 07:21:12 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:24:15.059 07:21:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.059 07:21:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.059 07:21:12 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:24:15.059 07:21:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:15.059 07:21:12 -- common/autotest_common.sh@10 -- # set +x 00:24:15.059 07:21:12 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:24:15.059 07:21:12 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:24:15.059 07:21:12 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:24:15.059 07:21:12 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:24:15.059 07:21:12 -- spdk/autotest.sh@398 -- # hostname 00:24:15.059 07:21:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:24:15.318 geninfo: WARNING: invalid characters removed from testname! 
00:24:47.393 07:21:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:47.393 07:21:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:49.924 07:21:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:52.511 07:21:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:55.873 07:21:52 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:24:58.404 07:21:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:01.690 07:21:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:01.690 07:21:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:25:01.690 07:21:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:25:01.690 07:21:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:01.690 07:21:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:25:01.690 07:21:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:01.690 + [[ -n 5211 ]] 00:25:01.690 + sudo kill 5211 00:25:01.699 [Pipeline] } 00:25:01.717 [Pipeline] // timeout 00:25:01.723 [Pipeline] } 00:25:01.739 [Pipeline] // stage 00:25:01.746 [Pipeline] } 00:25:01.763 [Pipeline] // catchError 00:25:01.772 [Pipeline] stage 00:25:01.775 [Pipeline] { (Stop VM) 00:25:01.788 [Pipeline] sh 00:25:02.067 + vagrant halt 00:25:06.252 ==> default: Halting domain... 00:25:11.566 [Pipeline] sh 00:25:11.847 + vagrant destroy -f 00:25:16.082 ==> default: Removing domain... 
00:25:16.093 [Pipeline] sh 00:25:16.373 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:25:16.383 [Pipeline] } 00:25:16.398 [Pipeline] // stage 00:25:16.403 [Pipeline] } 00:25:16.418 [Pipeline] // dir 00:25:16.423 [Pipeline] } 00:25:16.437 [Pipeline] // wrap 00:25:16.443 [Pipeline] } 00:25:16.456 [Pipeline] // catchError 00:25:16.466 [Pipeline] stage 00:25:16.468 [Pipeline] { (Epilogue) 00:25:16.482 [Pipeline] sh 00:25:16.763 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:23.351 [Pipeline] catchError 00:25:23.354 [Pipeline] { 00:25:23.368 [Pipeline] sh 00:25:23.649 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:23.649 Artifacts sizes are good 00:25:23.658 [Pipeline] } 00:25:23.673 [Pipeline] // catchError 00:25:23.684 [Pipeline] archiveArtifacts 00:25:23.691 Archiving artifacts 00:25:23.781 [Pipeline] cleanWs 00:25:23.793 [WS-CLEANUP] Deleting project workspace... 00:25:23.793 [WS-CLEANUP] Deferred wipeout is used... 00:25:23.799 [WS-CLEANUP] done 00:25:23.801 [Pipeline] } 00:25:23.817 [Pipeline] // stage 00:25:23.823 [Pipeline] } 00:25:23.837 [Pipeline] // node 00:25:23.842 [Pipeline] End of Pipeline 00:25:23.867 Finished: SUCCESS